Failed on connection exception #1
Hi @SudhanshuBlaze, it's a known issue in Spark, at least in the version this repo is using (I'll update it in the future). Have you tried the steps listed in the FAQ? Please try running the following commands inside the master node:
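The commands themselves were not captured in this copy of the thread. As a hedged sketch of the kind of checks usually involved, one would first confirm the NameNode daemon is actually up and reachable, and take HDFS out of safe mode if it is refusing operations:

```bash
# List running Hadoop daemons; NameNode should appear here
jps

# Ask the NameNode for a cluster report; "connection refused" at this
# point confirms the NameNode is unreachable at the fs.defaultFS address
hdfs dfsadmin -report

# If the NameNode is up but rejecting writes, leave safe mode
hdfs dfsadmin -safemode leave
```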
Let me know if that worked for you 😄
Hi, yes, it worked for me. I had to combine your solution with an answer I found on StackOverflow, which was to update my configuration.
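The StackOverflow answer itself is not linked in this capture, so the exact setting is unknown. A plausible change of this kind, offered purely as an assumption, is binding the NameNode's RPC and HTTP servers to all interfaces in hdfs-site.xml so that DataNodes in other containers can reach them:

```xml
<!-- Hypothetical hdfs-site.xml entries: the property names are real
     Hadoop settings, but whether they match the fix applied in this
     thread is an assumption. -->
<property>
  <name>dfs.namenode.rpc-bind-host</name>
  <value>0.0.0.0</value>
</property>
<property>
  <name>dfs.namenode.http-bind-host</name>
  <value>0.0.0.0</value>
</property>
```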
Thanks! I'll add it to the default conf!
Hi @Genarito, I just realized I'm facing another issue. When I go to port 9870 to see the HDFS web UI, the Live Nodes section shows 0 live nodes, even though I have 3 Docker containers running as worker nodes. What could be a possible solution to this problem? I have attached a screenshot for your reference. My OS:
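A hedged diagnostic sketch for the 0 Live Nodes symptom: from inside a worker container, check whether the DataNode daemon is running and whether it can reach the NameNode's RPC port. The hostname `master` and port 9000 below are assumptions; substitute whatever fs.defaultFS is set to in this image:

```bash
# Inside a worker container: is the DataNode process alive?
jps | grep DataNode

# Can the worker resolve and reach the NameNode RPC endpoint?
# ("master" and 9000 are assumed; use the actual fs.defaultFS host/port)
nc -zv master 9000

# Look for registration/connection errors in the DataNode log
# (log dir assumed to be the default under /usr/local/hadoop)
tail -n 50 /usr/local/hadoop/logs/*datanode*.log
```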
Hi @SudhanshuBlaze, I'm finishing my PhD thesis; once it's done, I'll get around to updating the technologies in this repository and improving the configuration to reduce the problems mentioned. I will leave this issue open to follow up on the problem in the future.
My recommendation would be to create a subnet when creating the Docker network in swarm mode. Also, when running the containers, add host entries, as in the sketch below:
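The concrete host entries were not included in this capture. As an illustrative sketch (the subnet, hostnames, and IP addresses are all assumptions), the Docker flags involved look like this:

```bash
# Create an attachable overlay network with a fixed subnet so container
# IPs stay predictable across the swarm (subnet value is illustrative)
docker network create --driver overlay --attachable \
  --subnet 10.20.0.0/24 cluster_net

# When starting a container, pin hostname-to-IP mappings so the HDFS
# nodes can resolve each other (names and addresses are assumptions)
docker run --network cluster_net \
  --add-host master:10.20.0.2 \
  --add-host worker1:10.20.0.3 \
  --add-host worker2:10.20.0.4 \
  jwaresolutions/big-data-cluster
```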
Hi, I was able to follow your instructions, but I am getting an error at this step:
hdfs dfs -copyFromLocal ./test.txt /test.txt
Command used to run the container:
docker container run --rm -v hdfs_master_data_swarm:/home/hadoop/data/nameNode jwaresolutions/big-data-cluster: /usr/local/hadoop/bin/hadoop namenode -format
Output from master node:
Output from my worker node:
Other info which might be helpful for you:
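The outputs referenced above appear to have been attached as screenshots and are not reproduced in this capture. One hedged observation on the namenode -format command shown earlier: re-running it generates a new clusterID, and DataNodes that still carry the old clusterID in their data directory will refuse to register, which is a classic cause of the 0 Live Nodes symptom. A way to check (the DataNode path below is an assumption, mirroring the NameNode path mounted in the command above):

```bash
# Compare clusterIDs between the NameNode and a DataNode; a mismatch
# prevents DataNode registration
grep clusterID /home/hadoop/data/nameNode/current/VERSION   # on the master
grep clusterID /home/hadoop/data/dataNode/current/VERSION   # on a worker

# If they differ, wipe the DataNode's data directory (this destroys its
# blocks) and restart it so it re-registers under the new clusterID
rm -rf /home/hadoop/data/dataNode/*
```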