What happened?
I have a configuration running several applications, some DBs, etc. inside Kubernetes. It is deployed with Rancher / Helm, and we use DevSpace for local file sync / development work with those container images.
devspace dev throws the following error on the other developers' machines only; it works on my machine:
sh: 1: /usr/local/apache2/devspace_start.sh: not found
DevSpace successfully creates the SSH session with the container, starts the SSH server in the container, starts and completes the file sync, and then fails to find this file.
Console output from running devspace dev:
info Using kube context 'rancher-desktop'
deploy:app Deploying chart ../my-chart/ (my-chart) with helm...
deploy:app Deployed helm chart (Release revision: 1)
deploy:app Successfully deployed app with helm
Execute hook 'run-make-api' at after:deploy
Waiting for running containers for hook 'run-make-api'
Execute hook 'run-make-api' in container 'default/api-container-56c75c7d86-kjg6r/api-container'
dev:app Waiting for pod to become ready...
dev:app Selected pod api-container-devspace-75d5698c76-cv88q
dev:app ports Port forwarding started on: 33285 -> 80
dev:app sync Sync started on: ./ <-> ./
dev:app sync Waiting for initial sync to complete
dev:app sync Initial sync completed
dev:app proxy Port forwarding started on: 10444 <- 10567
dev:app ssh Port forwarding started on: 11789 -> 8022
dev:app ssh Use 'ssh app.api-container.devspace' to connect via SSH
dev:app term Opening shell to api-container:api-container-devspace-75d5698c76-cv88q (pod:container)
sh: 1: ./devspace_start.sh: not found
What did you expect to happen instead?
The path in question that I've specified in devspace.yaml (seen below) changes per project, but I've verified they are all correct by connecting into the same container image via kubectl exec -- /bin/bash and confirming the file is there and the permissions look correct. I have also tried a relative path (./devspace_start.sh), but the issue persisted on their machines despite both forms working for me.
ls -la inside the container directory in question shows:
root@api-container-74895f9fff-f8z5l:/usr/local/apache2# ls -la
total 60
drwxr-xr-x 1 www-data www-data 4096 May 30 17:18 .
drwxr-xr-x 1 root root 4096 May 14 02:55 ..
...
-rwxr-xr-x 1 root root 1741 May 30 17:16 devspace_start.sh
The file is definitely on the container...
I've had them check other projects (with other paths I've verified) and they hit the same issue. I've also checked the logs inside the .devspace folder in the local directory, but they only contain errors about networks being closed. Additionally, I did plenty of debugging around the build of the Docker image itself, including fully clearing the build cache and rebuilding it on their machines locally, but the issue persisted.
I'm pretty new to devspace and it seems great so far aside from this one major issue, so any help would be appreciated...
How can we reproduce the bug? (as minimally and precisely as possible)
I cannot reproduce the issue locally on my machine unfortunately.
My devspace.yaml:
version: v2beta1
name: api-container

vars:
  IMAGE: api-container

pipelines:
  dev:
    run: |-
      run_dependencies --all      # 1. Deploy any projects this project needs (see "dependencies")
      ensure_pull_secrets --all   # 2. Ensure pull secrets
      create_deployments --all    # 3. Deploy Helm charts and manifests specified as "deployments"
      start_dev app               # 4. Start dev mode "app" (see "dev" section)
  deploy:
    run: |-
      run_dependencies --all                           # 1. Deploy any projects this project needs (see "dependencies")
      ensure_pull_secrets --all                        # 2. Ensure pull secrets
      build_images --all -t $(git describe --always)   # 3. Build, tag (git commit hash) and push all images (see "images")
      create_deployments --all                         # 4. Deploy Helm charts and manifests specified as "deployments"

dev:
  app:
    imageSelector: ${IMAGE}
    devImage: "api-container:0.1"
    sync:
      - path: dist:/usr/local/apache2/htdocs/
        uploadExcludePaths:
          - ./node_modules/
          - ./bower_components/
          - ./.devspace/
        downloadExcludePaths:
          - ./
    terminal:
      command: /usr/local/apache2/devspace_start.sh
    ssh:
      enabled: true
    proxyCommands:
      - command: devspace
      - command: kubectl
      - command: helm
      - gitCredentials: true
    ports:
      - port: 80
Local Environment:
DevSpace Version: 6.3.12
Operating System: windows
ARCH of the OS: ARM64

Kubernetes Cluster:
Cloud Provider: N/A (local; Helm / Rancher)
Kubernetes Version: 1.29.3
Anything else we need to know?
Hello! Is the devspace_start.sh included in the container build? Or should it be synced with your dev.sync configuration? I ask because the sync configuration doesn't seem like it would include the file at the path /usr/local/apache2/devspace_start.sh
sync:
- path: dist:/usr/local/apache2/htdocs/
Another thing to try: after running kubectl exec -- /bin/bash, once ls -la shows that the file exists, try executing it directly. Sometimes the shebang (e.g. #!/bin/zsh) names an interpreter that doesn't exist in that particular environment, and the error makes it appear that the script can't be found, when it's really the interpreter that's missing.
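Worth adding: the same misleading "not found" error also appears when the script has Windows (CRLF) line endings, because the kernel then looks for an interpreter literally named "/bin/bash" followed by a carriage return. Given the Windows machines in the environment above, a standalone sketch reproducing the symptom (paths are illustrative, not from the report):

```shell
# Write a script whose lines end in CRLF, as a Windows editor or
# git's autocrlf conversion might produce:
printf '#!/bin/bash\r\necho hello\r\n' > /tmp/crlf_demo.sh
chmod +x /tmp/crlf_demo.sh
ls -la /tmp/crlf_demo.sh   # the file exists and is executable...
/tmp/crlf_demo.sh || true  # ...but exec fails: the interpreter
                           # "/bin/bash<CR>" does not exist, and some
                           # shells report the *script* as "not found"

# Stripping the carriage returns fixes it:
tr -d '\r' < /tmp/crlf_demo.sh > /tmp/lf_demo.sh
chmod +x /tmp/lf_demo.sh
/tmp/lf_demo.sh            # prints "hello"
```

Checking the first line with `head -n 1 devspace_start.sh | od -c` inside the container would show any stray `\r` immediately.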
Hey Russell, thanks for the response. Executing the .sh file directly is something I have not tried; next time I'm debugging one of the developer machines that reproduces this issue, I'll try that. I did modify the devspace_start.sh file generated by the devspace init command, so they do have #!/bin/bash at the top of the file.
To answer your other question: yes, the .sh file is included when the Docker image is built, as the last layer in most of the Dockerfiles, like so: COPY ./devspace_start.sh ./
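Since COPY with a destination of ./ resolves against the current WORKDIR, the relevant Dockerfile tail presumably looks something like this (a sketch; only the COPY line is from the report, the WORKDIR and chmod lines are assumptions based on the httpd-style /usr/local/apache2 layout):

```dockerfile
# Assumed: WORKDIR is /usr/local/apache2, so "./" lands the script
# at the absolute path referenced by terminal.command in devspace.yaml.
WORKDIR /usr/local/apache2
COPY ./devspace_start.sh ./
# Worth adding if the executable bit ever differs between machines:
RUN chmod +x ./devspace_start.sh
```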