Creating a New Orion Package

The easiest way to start a new package containing cubes and floes is to generate a basic template using ocli packages init. This template provides a consistent starting point for creating cubes and floes, and it also sets up helpful tools for running tests.

After you have followed the directions in this section, see Orion Integration for more details on packaging, linting and detection, hardware requirements, parallelism, scheduling, and other run-time considerations.

Features

The ocli packages init command:

  • Sets up a skeleton of an Orion package containing cubes and floes.

  • Provides a simple example cube and floe.

  • Sets up testing with pytest, including working tests for the example cube and floe.

  • Provides a setup.py file with commands to run tests or build the package.

  • Stores the package version only in the module’s __init__.py file (see the sketch below).
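
Because the version lives only in __init__.py, setup.py and other tooling can read it from a single place. A minimal sketch of this convention (the value shown and the comment about how setup.py consumes it are hypothetical; the files generated by the template are authoritative):

    # {{module_name}}/__init__.py
    __version__ = "0.1.0"

    # setup.py can then read this single source of truth, for example by
    # importing or parsing __version__, so bumping the version means editing
    # only this one file.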

Requirements

  • Python 3.7 - 3.9 (For your specific platform, see Prerequisites.) We recommend starting with a clean conda environment.

  • orionplatform_library version 4.2 or higher.

  • Access to OpenEye’s Python package server, Magpie. If you are a licensed Orion user and don’t have access, please contact OpenEye Support.

Follow the instructions provided by OpenEye Support to configure access to Orion Python packages via your pip.conf file.

Setup

  1. Create and activate a new Anaconda environment:

    conda create -n ocli_dev python=3.7
    conda activate ocli_dev
    
  2. Using the new conda environment, install the Orion Platform package, including the command line interface ocli, by running:

    pip install openeye-orionplatform
    
  3. Run:

    ocli packages init
    
  4. This generates a directory named after the project_slug you provided when invoking the command. Switch into that directory:

    cd <project_slug>
    
  5. Next, install the package and the development requirements:

    pip install -e .
    pip install -r requirements_dev.txt
    

Commands

Once all dependencies are installed, you can build the package for upload to Orion (the tar.gz will be in the dist directory):

python setup.py package
ocli packages upload dist/<package-filename>.tar.gz

Tests are set up for each of the included cubes and floes and can be run locally:

python setup.py test-all

To test only the cubes:

python setup.py test-cubes

To test only the floes:

python setup.py test-floes
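
These commands run the pytest suites under tests/. As a rough illustration of what a cube-level unit test can look like, here is a hypothetical sketch using floe.test.CubeTestRunner and the example MyCube (with the {{module_name}} placeholder filled in); the generated tests/test_mycube.py is the authoritative example:

    # tests/test_mycube.py -- hypothetical sketch; the generated test is authoritative.
    from datarecord import OERecord
    from floe.test import CubeTestRunner

    from {{module_name}}.mycube import MyCube


    def test_mycube_passes_record_through():
        cube = MyCube("mycube")
        runner = CubeTestRunner(cube)
        runner.start()

        # Feed a single (empty) record into the cube's intake port.
        cube.process(OERecord(), cube.intake.name)
        runner.finalize()

        # The example cube is expected to emit the record on its success port.
        assert runner.outputs["success"].qsize() == 1
        assert runner.outputs["failure"].qsize() == 0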

To clean up generated documentation and packaging files, run:

python setup.py clean

Output Skeleton

The following directory structure will be created by the template generator; the items marked in {{ }} will be replaced by your choices upon completion:

{{project_slug}}/       <-- Top directory for your Project.
├── MANIFEST.in
├── README.md                        <-- README with your Project Name and description.
├── docs/                            <-- Docs subdirectory set up for automatic documentation of cubes and floes.
│    ├── Makefile
│    ├── make.bat
│    └── source
│        ├── conf.py
│        └── index.rst
├── floes/                           <-- Subdirectory where all floes should be placed.
│    └── myfloe.py                   <-- An example floe.
├── manifest.json                    <-- Manifest for Orion.
├── requirements_dev.txt             <-- Requirements file for developers of this package.
├── setup.py                         <-- Python file for creating the Python package.
├── tests/                           <-- Subdirectory for testing of cubes and floes.
│    ├── test_mycube.py              <-- An example unit test for the included cube.
│    └── floe_tests/                 <-- Subdirectory for floe tests.
│        └── test_myfloe.py          <-- An example unit test for the included floe.
└── {{module_name}}/     <-- Subdirectory of the package for the Python module. All cubes should go in here.
    ├── __init__.py
    └── mycube.py                    <-- An example cube.
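
To give a sense of what the generated example code contains, the sketches below show a minimal cube and floe in the spirit of mycube.py and myfloe.py. These are hypothetical illustrations assuming the floe.api and orionplatform APIs (ComputeCube, RecordPortsMixin, WorkFloe, DatasetReaderCube, DatasetWriterCube); the files generated by the template are the authoritative examples.

    # {{module_name}}/mycube.py -- hypothetical sketch of a minimal cube.
    from floe.api import ComputeCube
    from orionplatform.mixins import RecordPortsMixin


    class MyCube(RecordPortsMixin, ComputeCube):
        title = "My Cube"

        def process(self, record, port):
            # Pass each incoming record through to the success port.
            self.success.emit(record)

    # floes/myfloe.py -- hypothetical sketch of a minimal floe that wires the
    # example cube between an Orion dataset reader and writer.
    from floe.api import WorkFloe
    from orionplatform.cubes import DatasetReaderCube, DatasetWriterCube

    from {{module_name}}.mycube import MyCube

    job = WorkFloe("My Floe")
    ifs = DatasetReaderCube("ifs")
    cube = MyCube("mycube")
    ofs = DatasetWriterCube("ofs")

    job.add_cubes(ifs, cube, ofs)
    ifs.success.connect(cube.intake)
    cube.success.connect(ofs.intake)

    if __name__ == "__main__":
        job.run()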

Package Documentation

The provided documentation files are included as a convenience, to give users a starting point. They are incomplete, however, and will not build correctly as-is. If you wish to create full Sphinx documentation, including auto-generated references for cubes and floes, consider using the floe-pkg-tools library.

In the package’s manifest.json, specify documentation_root: <RELATIVE_PATH_TO_DOCS>. The root directory specified must contain an index.html file.

Example manifest.json contents:

{
    "name": "package_name",
    "requirements": "requirements.txt",
    "python_version": "3.7",
    "version": "0.1.0",
    "documentation_root": "docs"
}

Here docs is the relative path to a directory named docs/, which must contain an index.html file that acts as the starting point for the package’s documentation, alongside the rest of your documentation files.

Directions for Sphinx are beyond the scope of this section, but the system is well designed for Python programmers and is adequately documented. For an introduction, see Sphinx.

Creating a Package Image (Advanced)

If using the template (manifest-based) package is not flexible enough for your needs, the alternative, for those with Docker experience, is to create a Docker image.

The following commands assume that you are in an existing package directory (for example, the package you created in the previous section “Creating a New Orion Package”).

Prerequisites

To get started, you’ll need the following (at a minimum):

  • Docker installed locally, with BuildKit available (required for the --secret flag used below).

  • The ocli command line interface installed and configured (see the Setup section above).

  • An existing package directory containing a manifest.json, such as the one created in the previous section.

  • Your pip.conf file for Magpie access and a conda token; both are passed to the docker build as secrets.

Overview

The following diagram illustrates the environment:

  Package Image Layers

+--------------------------------+
|         Cubes and Floes        |
+--------------------------------+
|       Conda Environment        |
+--------------------------------+
|        Orion Base Image        |
+--------------------------------+
|   Linux Distribution (Amazon   |
|      Linux, Ubuntu, etc.)      |
+--------------------------------+

A Docker image consists of file system layers. When a container runs the image, the layers become merged into a single view of the filesystem. Orion package images are structured in layers as shown above.

Images are portable. They can be imported and exported as binary files from Docker (see docker save, docker load). Those files can also be uploaded to and downloaded from Orion. An image prepared on a developer’s machine and containing cubes & floes can be uploaded to Orion, where the image can be used to execute those cubes & floes without any modification.

Uploading Docker images to Orion instead of floe packages has several benefits:

  • The package environment & code is executed without modification. What you test & upload is what you ship.

  • Private packages can be used without uploading them to magpie, or making them visible to users.

  • Compiled binaries, system packages, binary data, etc. can be bundled into the image.

  • Package ingestion is faster for packages with complex dependencies.

  • Package portability is improved as fully prepared images cannot fail to process in the future due to changes in conda/pip/etc.

Creating an image from an existing floe

1. Download a base image

  • First, select a base image OS from the available choices. To list them:

    $ ocli packages list-base-images
    ubuntu-14.04
    ubuntu-18.04
    ubuntu-20.04
    amazonlinux1
    amazonlinux2
    
  • Trigger an image export job in Orion and download the resulting file:

    $ ocli packages export-base-image amazonlinux2 --download
    

2. Import base image file into Docker

  • Load the image into Docker:

    $ docker load -i orion-base-amazonlinux2-v10.0
    Loaded image ID: sha256:01da4f8f9748b3ac6cf5d265152fb80b9d7545075be8aa0a3d60770a98db9768
    
  • Tag the image using the SHA from the previous step:

    $ docker tag sha256:01da4f8f9748b3ac6cf5d265152fb80b9d7545075be8aa0a3d60770a98db9768 orion-base-amazonlinux2-v10.0
    
Note: The version included in the orion-base image name (v10.0 as shown above) is used to control how Orion expects the image to be structured.

This version may change after an Orion release. Export jobs always export the current/latest version, so tag the image accordingly.

3. Create a conda environment.yml

A command for converting a manifest.json into a conda environment.yml file is provided for convenience. The environment.yml file is required by the included Dockerfile and is used to construct a conda environment. Running this command requires that pyyaml be installed in your environment as a dependency (https://pypi.org/project/PyYAML/):

$ ocli packages create-environment-yaml -o environment.yml ./manifest.json

Sample environment.yml file:

dependencies:
  - python=3.9
  - pip:
    - '# Developer should handle tightening pinning of OpenEye-snowball version'
    - OpenEye-orionplatform<5.0.0,>=4.0.0
    - OpenEye-snowball
name: user_env

Note: For details about Conda environment files, see:
https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#creating-an-environment-from-an-environment-yml-file

4. Build a package image from a base image

The Dockerfile contains all of the commands necessary to build a working Conda environment and add the package source code to the image.

For convenience, a Dockerfile can be generated:

$ ocli packages show-dockerfile -o Dockerfile
Note: This Dockerfile is provided for reference, but using it is not required.

If the check-image command succeeds, then the image should work in Orion. For further detail on package requirements, run ocli packages show-packaging-help.

Customize as necessary, then build from the Dockerfile. Typical customizations might include installing package/compilation dependencies, or installing binary executables not available from conda/pip:

$ DOCKER_BUILDKIT=1 docker build --secret id=pip.conf,src=<path_to_pip.conf> --secret id=condatoken,src=<path_to_condatoken> -t mypackage-mypackageversion .

Enabling Docker BuildKit provides support for the --secret flag. The -t argument will apply a tag to the resulting image. The tag is not required, but it’s a Docker image best practice.

Note: Depending on your OS, a file holding the conda token might need to be created manually.

To check whether this file already exists, use the following command:

find $HOME -name condatoken

To create this file (assuming the token is configured in conda):

conda token list | sed "s/.*\ //" > $HOME/condatoken

5. Export a prepared image to a file

Save the image as a file:

$ docker save mypackage-mypackageversion -o mypackage-mypackageversion

Upload the image file to Orion & trigger inspection:

$ ocli packages upload-package-image mypackage-mypackageversion