Creating a New Orion Package

The easiest way to start a new package containing cubes and floes is to generate a basic template using the ocli packages init command. This template provides a consistent starting point for creating cubes and floes, and it also sets up helpful tools for running tests.

After you have followed the directions in this section, see Orion Integration for more details on packaging, linting and detection, hardware requirements, parallelism, scheduling, and other runtime considerations.

Features

The ocli packages init command:

  • Sets up a skeleton of an Orion package containing cubes and floes.

  • Provides a simple example cube and floe.

  • Sets up testing with PyTest, including working tests for the example cube and floe.

  • Provides a setup.py file with commands to run tests or build the package.

  • Provides version configuration by storing the version in a single place, the module’s __init__.py file.
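Keeping the version only in __init__.py means setup.py must read it back out at build time. The sketch below illustrates that common pattern; the function and regex are illustrative, not the template’s exact code:

```python
# Sketch: read __version__ from a module's __init__.py so the version
# lives in exactly one place. Function name and regex are illustrative.
import re

def read_version(init_text):
    """Extract the __version__ string from __init__.py source text."""
    match = re.search(r'^__version__\s*=\s*["\']([^"\']+)["\']', init_text, re.M)
    if match is None:
        raise ValueError("no __version__ found")
    return match.group(1)

# In a real setup.py, init_text would come from the file on disk, e.g.:
# init_text = open("mymodule/__init__.py").read()
print(read_version('__version__ = "0.1.0"\n'))  # 0.1.0
```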

Requirements

  • Python 3.7 - 3.9 (For your specific platform, see Prerequisites.) We recommend starting with a clean conda environment.

  • orionplatform library version 4.3.1 or higher.

  • Access to OpenEye’s Python package server, Magpie. If you are a licensed Orion user and don’t have access, please contact OpenEye Support. Follow the instructions you are given to configure access to Orion Python packages via your pip.conf file.
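As a rough illustration, the resulting pip.conf typically adds the package server as an extra index. The URL and token below are placeholders, not real values; use the ones provided in the access instructions:

```ini
# Placeholder values -- substitute the index URL and credentials
# provided in the access instructions.
[global]
extra-index-url = https://<token>@<magpie-index-url>
```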

Setup

  1. Create and activate a new Anaconda environment:

    conda create -n ocli_dev python=3.9 -y
    conda activate ocli_dev
    
  2. Using the new conda environment, install the Orion Platform package, including the command line interface ocli, by running:

    pip install openeye-orionplatform
    
  3. Run:

    ocli packages init
    
  4. This will generate a directory with the name you provided as the project_slug when invoking the command. Switch into the directory:

    cd <project_slug>
    
  5. Next, install the package and the development requirements:

    pip install -e .
    pip install -r requirements_dev.txt
    

Commands

Once all dependencies are installed, you can build the package for upload to Orion (the tar.gz will be in the dist directory):

python setup.py package
ocli packages upload dist/<package-filename>.tar.gz

Tests are set up for each of the included floes; they can be run locally:

python setup.py test-all

To test only the cubes:

python setup.py test-cubes

To test only the floes:

python setup.py test-floes

To clean up generated documentation and packaging files, run:

python setup.py clean

Output Skeleton

The template generator creates the following directory structure. The items marked in {{ }} will be replaced by your choices upon completion:

{{project_slug}}/       <-- Top directory for your Project.
├── MANIFEST.in
├── README.md                        <-- README with your Project Name and description.
├── docs/                            <-- Docs subdirectory set up for automatic documentation of cubes and floes.
│    ├── Makefile
│    ├── make.bat
│    └── source
│        ├── conf.py
│        └── index.rst
├── floes/                           <-- Subdirectory where all floes should be placed.
│    └── myfloe.py                   <-- An example floe.
├── manifest.json                    <-- Manifest for Orion.
├── orion-requirements.txt           <-- Package dependencies
├── requirements_dev.txt             <-- Requirements file for developers of this package.
├── setup.py                         <-- Python file for creating a python package
├── tests/                           <-- Subdirectory for testing of cubes and floes.
│    ├── test_mycube.py              <-- An example unit test for the included cube.
│    └── floe_tests/                 <-- Subdirectory for floe tests.
│        └── test_myfloe.py          <-- An example unit test for the included floe.
└── {{module_name}}/                   <-- Subdirectory of the package for the python module. All cubes should go in here.
    ├── __init__.py
    └── mycube.py                    <-- An example cube.

Note

For more details on package structure and linting, see the floe documentation.

Package Documentation

The provided documentation files are included as a convenience, to give users a starting point. They are incomplete, however, and will not build correctly as is. If you wish to create full Sphinx documentation, including auto-generated references for cubes and floes, consider using the floe-pkg-tools library.

In the package’s manifest.json, specify:

documentation_root: <RELATIVE_PATH_TO_DOCS>

The root directory specified must contain an index.html file.

Example manifest.json contents:

{
    "name": "package_name",
    "requirements": "requirements.txt",
    "python_version": "3.7",
    "version": "0.1.0",
    "documentation_root": "docs"
}

Here docs is the relative path to a directory named docs/, which must contain an index.html file that acts as the starting point for the package’s documentation, alongside the rest of your documentation files.
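Before uploading, it can be handy to sanity-check the manifest. The sketch below is not part of ocli; the required key names follow the example manifest above, and the checks are illustrative:

```python
# Illustrative sketch: sanity-check a parsed manifest.json.
# Required key names follow the example manifest shown above.
import json

REQUIRED_KEYS = {"name", "requirements", "python_version", "version"}

def check_manifest(manifest):
    """Return a list of problems found in a parsed manifest dict."""
    problems = ["missing key: " + k for k in sorted(REQUIRED_KEYS - manifest.keys())]
    # documentation_root is optional, but if present it must be a relative path
    root = manifest.get("documentation_root")
    if root is not None and root.startswith("/"):
        problems.append("documentation_root must be a relative path")
    return problems

manifest = json.loads('{"name": "package_name", "requirements": "requirements.txt",'
                      ' "python_version": "3.7", "version": "0.1.0",'
                      ' "documentation_root": "docs"}')
print(check_manifest(manifest))  # []
```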

Directions for Sphinx are beyond the scope of this section, but the system is well designed for Python programmers and is adequately documented. For an introduction, see Sphinx.

Creating a Package Image (Advanced)

If using the template (manifest-based) package is not flexible enough for your needs, the alternative, for those with Docker experience, is to create a Docker image.

Prerequisites

To get started, you’ll need the following (at a minimum):

Overview

The following diagram illustrates the environment:

  Package Image Layers

+--------------------------------+
|         Cubes and Floes        |
+--------------------------------+
|       Conda Environment        |
+--------------------------------+
|        Orion Base Image        |
+--------------------------------+
|   Linux Distribution (Amazon   |
|      Linux, Ubuntu, etc.)      |
+--------------------------------+

A Docker image consists of file system layers. When a container runs the image, the layers are merged into a single view of the filesystem. Orion package images are structured in layers as shown above.

Images are portable. They can be imported and exported as binary files from Docker (see docker save, docker load). Those files can also be uploaded to and downloaded from Orion. An image prepared on a developer’s machine and containing cubes and floes can be uploaded to Orion, where it can be used to execute those cubes and floes without any modification. When working with a Docker image rather than a normal floe package, all dependencies are downloaded on the developer machine during the build process.

Uploading Docker images to Orion instead of floe packages has several benefits:

  • The package environment and code are executed without modification. What you test and upload is what you ship.

  • Private packages can be used without uploading them to Magpie or making them visible to users.

  • Compiled binaries, system packages, binary data, etc. can be bundled into the image.

  • Package ingestion is faster for packages with complex dependencies.

  • Package portability is improved, as fully prepared images cannot fail to process in the future due to changes in conda/pip/etc.

Creating an image from an existing floe

This walkthrough assumes you have already gone through the steps to create a basic package. If not, this is the minimum:

$ ocli packages init

1. Download a base image

  • First, select a base image OS from the available choices. To see the available choices:

    $ ocli packages list-base-images
    ubuntu-14.04
    ubuntu-18.04
    ubuntu-20.04
    amazonlinux1
    amazonlinux2
    
  • Trigger an image export job in Orion and download the resulting file:

    $ ocli packages export-base-image amazonlinux2 --download
    

Note

Don’t run this in your package source directory. Save the downloaded file elsewhere to avoid repeatedly having to download it.

2. Import base image file into Docker

  • Load the image into Docker:

    $ docker load -i orion-base-amazonlinux2-v11.0
    Loaded image ID: sha256:01da4f8f9748b3ac6cf5d265152fb80b9d7545075be8aa0a3d60770a98db9768
    
  • Tag the image using the SHA from the previous step:

    $ docker tag sha256:01da4f8f9748b3ac6cf5d265152fb80b9d7545075be8aa0a3d60770a98db9768 orion-base-amazonlinux2-v11.0
    

Note

The version included in the orion-base image name (v11.0 as shown above) is used to control how Orion expects the image to be structured. This version may change after an Orion release. Export jobs always export the current/latest version, so tag the image accordingly.

3. Create a conda environment.yml

A command is provided for convenience to convert a manifest.json into a conda environment.yml file. The environment.yml file is required by the included Dockerfile and is used to construct a conda environment. Running this command requires that PyYAML be installed in your environment:

$ cd <my-package-source-directory>
$ ocli packages create-environment-yaml -o environment.yml ./manifest.json

Sample environment.yml file:

dependencies:
  - python=3.9
  - pip:
    - OpenEye-orionplatform<5.0.0,>=4.3.0
name: user_env
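Conceptually, the conversion takes the Python version and pip requirements and nests the pip requirements inside the conda dependency list. Below is a rough stdlib-only sketch of that data layout; the function is illustrative (the real command handles more cases), and the names mirror the sample file above:

```python
# Rough sketch of the manifest -> environment.yml structure. Dumping to
# YAML (e.g. with PyYAML) is omitted; only the data layout is shown.
def manifest_to_environment(python_version, pip_requirements, name="user_env"):
    """Build a conda-environment-shaped dict from manifest fields."""
    return {
        "name": name,
        "dependencies": [
            "python=" + python_version,
            {"pip": list(pip_requirements)},
        ],
    }

env = manifest_to_environment("3.9", ["OpenEye-orionplatform<5.0.0,>=4.3.0"])
```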

Note

For details about Conda environment files, see: Conda environment

4. Create a Dockerfile and .dockerignore

The Dockerfile contains all of the commands necessary to build a working Conda environment and add the package source code to the image.

For convenience, a Dockerfile can be generated:

$ ocli packages show-dockerfile -o Dockerfile

Note

This Dockerfile is provided for reference; feel free to write your own or customize it as necessary. Typical customizations include installing runtime dependencies or installing binary executables not available from conda/pip. If the check-image command succeeds, the image should work in Orion. For further detail on package requirements, run ocli packages show-packaging-help.

The packaging process for a normal Orion floe package excludes tests and other development files from the tarfile that gets uploaded. For details, see MANIFEST.in, or try building and examining the contents of the package created by python setup.py package. Similarly, a .dockerignore file is the typical way of ensuring that unneeded files are not included in an image when using Docker.

Create a file named .dockerignore containing these lines:

*.pyc
requirements_dev.txt
dist
tests
docs

Note

Since MANIFEST.in is not being used during the Docker build process, it may be deleted to avoid confusion. Alternatively, the existing Python build process could be utilized as the first of two steps in building an image, for example, by first running python setup.py package && cd dist && tar xf ./<my-package>.tar.gz and then copying from the dist directory rather than the source directory.

5. Build the Docker image

Enabling Docker BuildKit provides support for the --secret flag. Any file passed this way is available only to Dockerfile instructions that reference the secret with --mount=type=secret,id=condatoken,dst=/home/floeuser/condatoken; the secret will not exist in any layer of the image.
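For illustration, a Dockerfile instruction consuming the condatoken secret might look like the following; the conda token subcommand is assumed to be available in the image, so adapt as needed:

```dockerfile
# The secret file exists only while this RUN instruction executes;
# it is never committed to an image layer.
RUN --mount=type=secret,id=condatoken,dst=/home/floeuser/condatoken \
    conda token set "$(cat /home/floeuser/condatoken)"
```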

Build the Docker image. The -t argument tags the final image; this is not required, but it is considered a Docker best practice:

$ DOCKER_BUILDKIT=1 docker build --secret id=pip.conf,src=<path_to_pip.conf> --secret id=condatoken,src=<path_to_condatoken> -t mypackage-mypackageversion .

Note

Depending on your OS, you may need to find or create the pip.conf and condatoken files before running this command. The typical location of pip.conf is $HOME/.config/pip/pip.conf. A file holding the conda token might need to be created manually. To check whether this file already exists:

find $HOME -name condatoken

To create the file (assuming the token is configured in conda):

conda token list | sed "s/.*\ //" > $HOME/condatoken

6. Export the image and upload to Orion

Save the image as a file:

$ docker save mypackage-mypackageversion -o mypackage-mypackageversion

Upload the image file to Orion and trigger inspection:

$ ocli packages upload-package-image mypackage-mypackageversion

7. Upload the image using a docker registry

As an alternative to locally saving and uploading the image, Orion also supports using a docker registry.

Push the image to a docker registry:

$ docker image push my-registry/my-package-repo

Pull the image from a docker registry and trigger inspection:

$ ocli packages import-from-registry --registry-url my-registry/my-package-repo --image-tag my-package-v1 --pull-username myusername --secret mysecret-id

Note

Docker special-cases registries hosted on Docker Hub. Instead of using the full URL for the registry, use only the suffix. In the example above, my-registry/my-package-repo is expanded (by Docker) to https://hub.docker.com/repository/docker/my-registry/my-package-repo. For any non-Docker Hub registry, the full URL must be used.

The --secret argument is the ID of an Orion secret containing the password for the image registry. Create a secret in Orion:

$ ocli secrets create mysecret --value <password-for-image-registry>

Floe Package Image Requirements

This section describes the requirements for Docker images uploaded as Floe Packages.

User

The image must have a configured Linux user with username “floeuser” and user ID 502.

File System Layout

Docker images for Floe Packages require a few additional files to be present. The recommended layout, which works with the standard Dockerfile example, is as follows:

All files and directories below should be owned by floeuser:

/home/floeuser                                       # Home directory for the user which executes package code
/home/floeuser/miniconda/envs/user_env/              # Should contain a conda environment
/home/floeuser/miniconda/envs/user_env/bin/python    # Python interpreter used for executing package code
/home/floeuser/.pip/pip.conf                         # Not strictly required, but must be present for conda to use if pip dependencies are present
/package/<package name>/                             # Top level package source directory
                    <typical package files...>

Dockerfile Environment Variables

The following environment variables should be set as shown. Note that PATH, LD_LIBRARY_PATH, and PYTHONPATH may be extended to include other paths beyond what is specified in this section.

Note

The syntax below is for Dockerfiles.

Required for all packages:

ENV PATH=/home/floeuser/miniconda/bin:/runtime/bin:$PATH
ENV LD_LIBRARY_PATH=/runtime/lib64:$LD_LIBRARY_PATH
ENV PYTHONPATH=/package/<package name>:$PYTHONPATH

Required for GPU enabled packages:

ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility
ENV LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64:$LD_LIBRARY_PATH
ENV PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:$PATH
ENV NVIDIA_VISIBLE_DEVICES=all
LABEL com.nvidia.volumes.needed="nvidia_driver"