Add enhancement for OVA storage improvements #1302
Conversation
Signed-off-by: Sam Lucidi <[email protected]>
1. Allow specifying any PVC as the storage backing for an OVA provider, rather than requiring the coordinates of an NFS share.
2. Expose an endpoint on the OVA provider server that permits uploading appliances to the catalogue.
If we start supporting upload of OVAs, we will need to create the PVC via a storage class or have the user point us to the PVC.
Is there some preexisting solution for such an upload? I know KubeVirt CDI allows directly populating PVCs, so maybe we could use something similar.
The design I'm proposing is that the user provides a single prepared PVC by reference, which is mounted by the provider server pod. This serves as the long term storage for all of the OVA appliances. I don't think CDI's image upload feature is going to apply here, since what we want to support (ultimately) is drag-and-drop upload through the UI. That would require an HTTP endpoint to accept the file, and it would be stored in the provided PVC. Maybe I'm missing a possibility though.
### Security, Risks, and Mitigations
The upload component of this enhancement permits uploading and deleting arbitrary files from a volume in the cluster,
Would we upload to one volume, or would each OVA have its own volume?
That might be problematic with the size requirements.
One volume. The end user will need to ensure the volume they allocate has enough space for the OVAs they want to store. (This is already the case with the NFS share, from which Forklift creates a PV.)
### Inventorying remote appliances
An alternative to storing appliances locally is to store the URLs of appliances, and then use nbdkit to remotely examine
Yeah, nbdkit would be another alternative; do you think these features go against each other?
In my mind we could do both features, depending on the time.
The problem with nbdkit is that the customer would still need to set up some HTTP server, whereas what this enhancement describes would allow users to easily upload directly.
I don't think the two approaches are necessarily mutually exclusive; both could be supported. But I believe that remotely inventorying appliances requires larger design changes that would be better approached in a follow-up feature.
* The overhead of transferring the appliance image across the network into the cluster is incurred each time a VM is created from the appliance instead of once when it is added to the catalogue.
Ah, multiple OVA imports, interesting and good point!
Won't it cause problems with name duplication or other duplicates?
I'm wondering if KubeVirt could have something like that.
Maybe we would just import once, have a template VM, and then clone it.
I think it is most likely that users will want to import appliance images more than once, so we need to consider an approach to automatically generate names for VMs imported from an appliance image. KubeVirt has VM clone functionality, but it requires a KubeVirt feature gate to be enabled and requires the storage drivers to support snapshotting.
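For illustration only, automatic name generation could be as simple as the sketch below; the helper name and suffix scheme are hypothetical and not part of this proposal.

```go
// Hypothetical helper: derive a unique VM name from an appliance file name,
// e.g. "web-appliance.ova" -> "web-appliance-x7k2q".
package util

import (
	"strings"

	utilrand "k8s.io/apimachinery/pkg/util/rand"
)

func GenerateVMName(applianceFile string) string {
	base := strings.TrimSuffix(strings.ToLower(applianceFile), ".ova")
	base = strings.ReplaceAll(base, "_", "-")
	return base + "-" + utilrand.String(5)
}
```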
# OVA Provider Storage Improvements
## Summary
Allow specifying any PVC as the appliance catalogue backing for an OVA provider (rather than requiring
the coordinates of an NFS share) and provide a way for the end user to upload appliances to
the catalogue.
## Motivation
The current implementation of the OVA provider in Forklift is difficult to use. Users
must specify the coordinates of an NFS share that contains appliances, and adding appliances
requires the user to add them to the NFS share outside the Forklift user experience.
Kubernetes/OpenShift users generally expect to think of storage in terms of PVs/PVCs, so
having to provide an NFS share rather than a PVC (which could be backed by NFS) is
not aligned with user expectations. Allowing the user to specify a PVC to back the OVA
provider would be much more flexible and better match user expectations, and providing
a UI for appliance upload would remove friction.
### Goals
1. Allow specifying any PVC as the storage backing for an OVA provider, rather than requiring
   the coordinates of an NFS share.
2. Expose an endpoint on the OVA provider server that permits uploading appliances to the catalogue.
## Proposal
### User Stories
#### Story 1
As a Forklift administrator, I want to configure an OVA provider to use an arbitrary volume as
the backing for the appliance catalogue.
#### Story 2
As a Forklift user, I want to upload an appliance to the appliance catalogue.
#### Story 3
As a Forklift user, I want to see which appliances have been uploaded to the appliance catalogue.
#### Story 4
As a Forklift user, I want to remove an appliance from the appliance catalogue.
#### Story 5
As a Forklift administrator, I want to disable the appliance catalogue management endpoints.
### Implementation Notes
Implementation can be approached in two stages.
#### Supporting Arbitrary PVCs
The provider controller currently expects an OVA provider resource to contain
the path to an NFS share in its `url` field. This URL is used to construct an NFS PV, which is
mounted by an OVA provider server that is created when the OVA provider resource is reconciled.
This field will be made optional for OVA providers, and the provider controller will be updated to look for a
`pvc` key in the `settings` field, which may contain the namespaced name of a PVC that will be mounted
by the provider server.
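For illustration, an OVA provider configured this way might look roughly like the manifest below; the `pvc` settings key follows the description above, while the surrounding field names and values are placeholders rather than a finalized schema.

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: ova-provider
  namespace: konveyor-forklift
spec:
  type: ova
  # No url: the appliance catalogue is backed by a PVC instead of an NFS share.
  settings:
    # Namespaced name of a prepared PVC to be mounted by the provider server.
    pvc: konveyor-forklift/ova-catalogue
```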
The current provider controller implementation does not update the deployment for the OVA provider server
when an OVA provider resource is reconciled; the deployment is only created if it does not exist and deleted
when an OVA provider resource is removed. The provider controller needs to be adjusted to keep OVA provider server
deployments in sync with the storage configuration of the provider resource.
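The shape of that reconciliation step might look something like the sketch below. This is not existing Forklift code; the function name, volume name, and the `pvc` settings key are assumptions drawn from the description above.

```go
// Rough sketch: keep the OVA provider server deployment's catalogue volume in
// sync with the provider's storage settings, instead of only creating the
// deployment once and never updating it.
package ova

import (
	"context"
	"strings"

	appsv1 "k8s.io/api/apps/v1"
	core "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

const catalogueVolume = "ova-catalogue"

// syncServerStorage rewrites the catalogue volume on the provider server
// deployment and pushes the update, so a change to the provider's storage
// configuration is reflected without deleting and recreating the deployment.
func syncServerStorage(ctx context.Context, cl client.Client, settings map[string]string, nfsClaim string, deployment *appsv1.Deployment) error {
	volume := core.Volume{Name: catalogueVolume}
	if pvc := settings["pvc"]; pvc != "" {
		// User-supplied PVC, given as "namespace/name"; only the name is used
		// here since the claim must live in the deployment's namespace.
		parts := strings.Split(pvc, "/")
		volume.PersistentVolumeClaim = &core.PersistentVolumeClaimVolumeSource{
			ClaimName: parts[len(parts)-1],
		}
	} else {
		// Legacy behavior: the PVC created from the provider's NFS URL.
		volume.PersistentVolumeClaim = &core.PersistentVolumeClaimVolumeSource{
			ClaimName: nfsClaim,
		}
	}

	podSpec := &deployment.Spec.Template.Spec
	replaced := false
	for i := range podSpec.Volumes {
		if podSpec.Volumes[i].Name == catalogueVolume {
			podSpec.Volumes[i] = volume
			replaced = true
		}
	}
	if !replaced {
		podSpec.Volumes = append(podSpec.Volumes, volume)
	}
	return cl.Update(ctx, deployment)
}
```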
This will require corresponding UX changes to relax the requirement on the URL field and expose an interface for
setting the desired PVC.
#### Uploading OVAs
The OVA provider server is currently very simple, consisting of a single file using the built-in HTTP server to expose
endpoints for consumption by the Forklift inventory. The OVA provider server needs to be extended with endpoints for
uploading, listing, and removing appliances.
* `POST /appliances`: Accepts a single OVA file to be stored in the appliance catalogue, returning a 409 if an appliance with that filename already exists.
* `GET /appliances`: Returns a JSON array containing metadata about the appliances in the catalogue (size, upload date, etc.).
* `DELETE /appliances/:appliance`: Removes an appliance from the catalogue, using the OVA filename as the parameter.

Due to this increase in complexity, the provider server should be refactored to use a more feature-rich server
(preferably Gin, which is used in Forklift for the inventory) and reorganized into multiple files. Special consideration
needs to be given to the upload endpoint to ensure that uploaded appliances are streamed to disk efficiently. Configuration
options should be provided to set the maximum accepted file size.
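A rough sketch of what the refactored server could look like, assuming Gin is adopted as suggested; the catalogue directory, route handlers, and default size limit are placeholders rather than decided values.

```go
package main

import (
	"io"
	"net/http"
	"os"
	"path/filepath"

	"github.com/gin-gonic/gin"
)

// catalogueDir is the mount point of the catalogue PVC (placeholder path).
const catalogueDir = "/ova"

func main() {
	// Maximum accepted upload size; in practice this would come from configuration.
	maxUploadBytes := int64(50 << 30)

	r := gin.Default()

	// POST /appliances: stream a single OVA to disk; 409 if the filename already exists.
	r.POST("/appliances", func(c *gin.Context) {
		c.Request.Body = http.MaxBytesReader(c.Writer, c.Request.Body, maxUploadBytes)
		reader, err := c.Request.MultipartReader()
		if err != nil {
			c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
			return
		}
		part, err := reader.NextPart()
		if err != nil {
			c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
			return
		}
		defer part.Close()
		dest := filepath.Join(catalogueDir, filepath.Base(part.FileName()))
		if _, err := os.Stat(dest); err == nil {
			c.JSON(http.StatusConflict, gin.H{"error": "appliance already exists"})
			return
		}
		out, err := os.Create(dest)
		if err != nil {
			c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
			return
		}
		defer out.Close()
		// Copy the request body to the catalogue volume without buffering the whole file in memory.
		if _, err := io.Copy(out, part); err != nil {
			c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
			return
		}
		c.Status(http.StatusCreated)
	})

	// GET /appliances: return metadata about the appliances in the catalogue.
	r.GET("/appliances", func(c *gin.Context) {
		entries, err := os.ReadDir(catalogueDir)
		if err != nil {
			c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
			return
		}
		appliances := []gin.H{}
		for _, entry := range entries {
			info, err := entry.Info()
			if err != nil {
				continue
			}
			appliances = append(appliances, gin.H{
				"name":       entry.Name(),
				"size":       info.Size(),
				"uploadedAt": info.ModTime(),
			})
		}
		c.JSON(http.StatusOK, appliances)
	})

	// DELETE /appliances/:appliance: remove an appliance by filename.
	r.DELETE("/appliances/:appliance", func(c *gin.Context) {
		name := filepath.Base(c.Param("appliance"))
		if err := os.Remove(filepath.Join(catalogueDir, name)); err != nil {
			c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
			return
		}
		c.Status(http.StatusNoContent)
	})

	r.Run(":8080")
}
```

Reading the multipart body directly, rather than buffering the whole form, keeps memory usage flat regardless of appliance size, which matters given that OVAs can be tens of gigabytes.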
### Security, Risks, and Mitigations
The upload component of this enhancement permits uploading and deleting arbitrary files from a volume in the cluster,
and those arbitrary uploads will be used in migration to construct VMs that will run in the cluster. This is an inherently
risky operation that is acceptable given the risk already inherent in migrating VMs into the cluster.
The provider server endpoints should be protected by auth, as is done for the inventory endpoints, and a configuration
option should be provided so that the administrator can disable the OVA upload/delete endpoints.
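A minimal sketch of how that gating could be wired up, assuming the Gin-based server described above; the environment variable, function name, and handler parameters are illustrative rather than settled names.

```go
// Hypothetical route registration: the catalogue listing is always available,
// upload/delete are only registered when explicitly enabled, and every route
// sits behind an authentication middleware.
package server

import (
	"os"

	"github.com/gin-gonic/gin"
)

func registerApplianceRoutes(r *gin.Engine, authRequired, list, upload, remove gin.HandlerFunc) {
	appliances := r.Group("/appliances", authRequired)
	appliances.GET("", list)

	// Administrators can switch off catalogue management entirely.
	if os.Getenv("OVA_CATALOGUE_MANAGEMENT_ENABLED") == "true" {
		appliances.POST("", upload)
		appliances.DELETE("/:appliance", remove)
	}
}
```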
## Design Details
### Test Plan
Integration tests should verify that the OVA provider server deployment is reconfigured when the OVA provider's
storage configuration is changed, that providers configured with NFS shares continue to function, that
the catalogue management endpoints work, and that uploaded OVAs can be imported.
### Upgrade / Downgrade Strategy
OVA providers that rely on NFS shares can continue to be supported until it becomes reasonable
to deprecate and remove direct consumption of NFS shares. The provider controller shall update
any existing OVA provider server deployments to ensure they refer to the correct version of
the image.
An OVA Provider resource that has been created with a reference to a PVC instead of an NFS share
cannot be inventoried or used in a plan if Forklift is downgraded to a prior version. The existing
version of Forklift does not update the OVA provider server deployment if the provider is changed, so
downgrading will require any OVA provider server deployments to be removed so that the provider controller
can recreate them with the correct version of the image.
## Implementation History
## Drawbacks
Allowing the end user to provide a PVC of their choice to use for the appliance catalogue rather than requiring an
NFS share is a purely beneficial improvement in usability and aligns better with Kube/OpenShift patterns.
Likewise, allowing upload of images into the store is a significant improvement in usability that comes at the minor
cost of a slightly more complex provider server to handle the file upload. There are no meaningful drawbacks to these
usability improvements.
## Alternatives
### Inventorying remote appliances
An alternative to storing appliances locally is to store the URLs of appliances, and then use nbdkit to remotely examine
the appliance so that it can be added to the inventory. Then the remote appliance disks could be imported directly to
VM PVCs without the intermediate step of storing the appliance in the cluster. This avoids the long term storage
overhead of having the full appliance in the provider catalogue, but it has several drawbacks that make it unsuitable.
* The overhead of transferring the appliance image across the network into the cluster is incurred each time a VM is created from the appliance instead of once when it is added to the catalogue.
For these reasons it is more practical to upload the appliance into the provider catalogue once and store it for future
use.