Pipeline YAML Format
This page describes the pipeline YAML format supported by IBM Cloud DevOps open toolchains. It is used in the Deploy to IBM Cloud Button (D2IC) flow to enable the creation of samples that contain pipelines using Docker container extensions. The feature is currently only available for hosted Git (Git Repos and Issue Tracking) and GitHub projects targeting IBM Cloud.
If a pipeline YAML file is present in the toolchain template repository or in a D2IC sample, a pipeline based on that description is created (and its first stage executed) as part of the D2IC flow. The file must be named pipeline.yml and located in a folder called .bluemix, that is: .bluemix/pipeline.yml
The format is a single YAML document containing a pipeline specification. Here is a pipeline.yml that builds and deploys a Node.js application to IBM Cloud:
---
defaultBaseImageVersion: latest
properties: []
private_worker: ${PRIVATE_WORKER}
stages:
- name: Build Stage
  worker: ${PRIVATE_WORKER}
  inputs:
  - type: git
    branch: ${BRANCH}
    service: ${REPO}
  jobs:
  - name: Build
    type: builder
    artifact_dir: ''
- name: Deploy Stage
  inputs:
  - type: job
    stage: Build Stage
    job: Build
  triggers:
  - type: stage
  jobs:
  - name: Deploy
    type: deployer
    curatedDockerImage: default
    target:
      region_id: ${CF_REGION_ID}
      organization: ${CF_ORGANIZATION}
      space: ${CF_SPACE}
      application: ${CF_APP}
    script: |-
      #!/bin/bash
      cf push "${CF_APP}"
      # View logs
      # cf logs "${CF_APP}" --recent
The YAML above references environment variables (enclosed in ${}) that are defined as part of the Pipeline service configuration in the toolchain.yml. For the pipeline.yml above, the environment variables PRIVATE_WORKER, BRANCH, REPO, CF_APP, CF_SPACE, CF_ORGANIZATION, and CF_REGION_ID are defined in the env property of the pipeline service, as shown below:
services:
  repo:
    service_id: hostedgit
    parameters:
      repo_name: '{{toolchain.name}}'
      repo_url: '{{repository}}'
      type: clone
      has_issues: true
      enable_traceability: true
  build:
    service_id: pipeline
    parameters:
      services:
      - repo
      - private_worker
      name: '{{toolchain.name}}'
      ui-pipeline: true
      configuration:
        content:
          $text: pipeline.yml
        env:
          REPO: repo
          BRANCH: '{{branch}}'
          PRIVATE_WORKER: '{{services.private-worker.parameters.name}}'
          CF_APP: '{{form.pipeline.parameters.app-name}}'
          CF_SPACE: '{{space}}'
          CF_ORGANIZATION: '{{organization}}'
          CF_REGION_ID: '{{region}}'
        execute: true
Before the pipeline is created, the environment variables are resolved and the substituted values are used in the configuration. These variables are also available during job execution and can be referenced in scripts.
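During job execution these variables behave like ordinary shell environment variables. A minimal sketch, with illustrative values standing in for the pipeline-provided ones:

```shell
#!/bin/bash
# Sketch (illustrative values): in a real job these variables are
# resolved and exported by the pipeline before the script runs,
# so the script can reference them directly.
CF_APP="my-sample-app"   # would come from '{{form.pipeline.parameters.app-name}}'
BRANCH="master"          # would come from '{{branch}}'

MESSAGE="Deploying ${CF_APP} from branch ${BRANCH}"
echo "$MESSAGE"
# A real deploy job would then run, for example:
# cf push "${CF_APP}"
```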
In the snippet above, the REPO environment variable references the repository service, which defines where the repository is hosted (GitHub or Git Repos and Issue Tracking). The pipeline service definition also contains the other parameters required to add the pipeline to the toolchain.
For more information, see: https://console.bluemix.net/docs/services/ContinuousDelivery/pipeline_deploy_var.html#deliverypipeline_environment
Pipeline:
---
defaultBaseImageVersion: '1.0' | '2.0' | 'latest' ; pipeline base image version
properties:
  <sequence of pipeline config properties>
private_worker: <private worker service name>
stages:
  <sequence of stages>
Stage:
name: <stage name>
[worker: <private worker name>]
[inputs:
  <sequence of inputs>]
[triggers:
  <sequence of triggers>]
[properties:
  <sequence of properties>]
[jobs:
  <sequence of jobs>]
Input:
type: 'git' | 'job'
[branch: <branch name>] ; git inputs only
[service: <repo>] ; GIT/GRIT repo service, referenced through an environment variable
stage: <stage name> ; job inputs only
job: <job name> ; job inputs only
Trigger:
type: 'commit' | 'git' | 'stage' ; 'commit' triggers when a git push event occurs, 'stage' triggers when the preceding stage completes, 'git' triggers when a matching 'event' occurs
[events: 'push' | 'pull_request' | 'pull_request_closed'] ; for the 'git' type, determines whether to trigger on a commit push, a pull request change, or a pull request close. Events can be combined as a space-separated list; for example, 'push pull_request' triggers on either a 'push' or a 'pull_request'.
[enabled: 'true' | 'false'] ; 'true' is assumed if not specified; 'false' sets the stage to run manually
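For example, a stage that should run whenever a commit is pushed or a pull request is opened or updated might declare a trigger like this (a sketch based on the fields above):

```yaml
triggers:
- type: git
  events: 'push pull_request'
  enabled: 'true'
```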
Property:
name: <property name>
value: <property value>
[type: 'text' | 'secure' | 'text_area' | 'file'] ; 'text' is assumed if not specified
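A stage might declare, say, a plain-text endpoint and a secure credential as properties (the names and values here are illustrative only):

```yaml
properties:
- name: API_ENDPOINT
  value: https://example.com/api
  type: text
- name: API_KEY
  value: ${API_KEY}
  type: secure
```

Properties of type 'secure' are masked in the pipeline UI and logs, so they are the appropriate choice for credentials.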
Job:
[name: <job name>]
type: 'builder' | 'deployer' | 'tester'
fail_stage: 'true' | 'false'
[curatedDockerImage: <job image version>]
[extension_id: <extension id>] ; extension jobs only
[working_dir: <working dir path>] ; builder and tester only
[artifact_dir: <artifact path>] ; builder only
[build_type: <build type>] ; builder only
[script: <script>] ; not for extension jobs
[enable_tests: 'true' | 'false'] ; builder and tester only; some extensions may support this
[test_file_pattern: <pattern>] ; builder and tester only; some extensions may support this
[coverage_type: 'cobertura' | 'jacoco' | 'istanbul'] ; builder and tester only; some extensions may support this
[coverage_directory: <path to directory>] ; builder and tester only; some extensions may support this
[coverage_file_pattern: <pattern>] ; builder and tester only; some extensions may support this
[target: <target>] ; deployer and extension jobs only
[<extension property name>: <value>] ; extension jobs only
[docker_image: <image name>] ; only when build_type is 'customimage'; defines the name of the custom Docker image
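Putting a few of these fields together, a builder job that runs inside a custom Docker image might look like the following sketch (the image name and script commands are illustrative):

```yaml
jobs:
- name: Build
  type: builder
  build_type: customimage
  docker_image: node:10
  artifact_dir: ''
  script: |-
    #!/bin/bash
    npm install
    npm test
```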
Target:
region_id: <region id>
organization: <organization name>
space: <space name>
[application: <application name>]
[api_key: <api key>] ; deployer only, requires kubernetes_cluster or kubernetes_cluster_id
[api_key_id: <api key UUID>] ; deployer only, requires kubernetes_cluster or kubernetes_cluster_id
[kubernetes_cluster: <cluster name>] ; deployer only, requires api_key or api_key_id of an api key for the account this cluster is in
[kubernetes_cluster_id: <cluster id>] ; deployer only, requires api_key or api_key_id of an api key for the account this cluster is in
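The pipeline.yml earlier on this page shows a Cloud Foundry target. Based on the fields above, a deployer target for a Kubernetes cluster would instead pair an API key with a cluster name; a sketch, with the environment variable names being assumptions of this example:

```yaml
target:
  region_id: ${REGION_ID}
  api_key: ${API_KEY}
  kubernetes_cluster: ${KUBE_CLUSTER_NAME}
```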
Before using information from this site, please see the information on the Home page.