Tuesday, January 10, 2012

How to set up password-less ssh to the slaves?

When setting up Hadoop on a cluster of machines, the master must be able to ssh to all the slaves without a password in order to start the daemons on them.

Classic MR - the master starts the TaskTracker and the DataNode on all the slaves.

MRv2 (next generation MR) - the master starts the NodeManager and the DataNode on all the slaves.

Here are the steps to set up password-less ssh. Ensure that port 22 is open on all the slaves (`telnet slave-hostname 22` should connect).

1) Install openssh-client on the master
sudo apt-get install openssh-client
2) Install openssh-server on all the slaves
sudo apt-get install openssh-server
3) Generate the ssh key
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
4) Copy the key to all the slaves (replace username with the user that starts the Hadoop daemons). You will be prompted for the password.
ssh-copy-id -i $HOME/.ssh/id_rsa.pub username@slave-hostname
5) If the master also acts as a slave, append the key locally as well (afterwards `ssh localhost` should work without a password)
cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys 
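With more than a few slaves, step 4 gets tedious. A minimal sketch that loops over a hostname file (the file name `slaves.txt`, the helper name, and the `SSH_USER` variable are assumptions for illustration, not part of Hadoop):

```shell
# copy_key_to_slaves FILE: run ssh-copy-id against every hostname
# listed in FILE (one hostname per line). SSH_USER can override the
# login user; it defaults to the current user.
copy_key_to_slaves() {
    slaves_file=$1
    while read -r slave; do
        ssh-copy-id -i "$HOME/.ssh/id_rsa.pub" "${SSH_USER:-$USER}@$slave"
    done < "$slaves_file"
}
```

Each slave still prompts once for its password during the copy; after that, ssh to it should be password-less.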

If hdfs/mapreduce are run as different users, then steps 3, 4, and 5 have to be repeated for each of those users.

How to test?

1) Run `ssh user@slave-hostname`. It should connect without prompting for a password.
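If the slave still prompts for a password (as some commenters below report), the most common cause is lax permissions on the slave side: sshd silently ignores `authorized_keys` when `~/.ssh` or the file itself is writable by others. Tightening the permissions on each slave usually fixes it:

```shell
# Run on each slave as the Hadoop user: sshd refuses to use keys
# kept in a directory or file that other users can write to.
mkdir -p "$HOME/.ssh"
chmod 700 "$HOME/.ssh"
touch "$HOME/.ssh/authorized_keys"
chmod 600 "$HOME/.ssh/authorized_keys"
```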


  1. It works! Thanks a lot!

  2. I followed these steps in the same way but it still prompts me for the password.

    1. Yes, the steps above alone were not enough; even I have the same issue and I don't know what happened.

      chaitu -> the namenode's username is the datanode's IP; I don't know how they both connected?
      chaitu@'s password: Permission denied, please try again.

  3. Thanks a lot Praveen. It helped me a lot. I was stuck on this for a long time.
    You helped me solve it very cleanly.
    Thanks again, and keep posting useful posts on the Hadoop Eco System.

  4. Thanks a lot, this really helps...