Java file upload and download with a HADOOP cluster

  • 2020-04-01 03:43:34
  • OfStack

Uploading and downloading files in HDFS are basic cluster operations. Hadoop: The Definitive Guide contains example code for uploading and downloading files, but it says little about how to configure the HADOOP client. After a long search and much debugging, I have summarized how to use the cluster configuration, along with a tested program for operating on files in the cluster. First, the corresponding environment variables need to be configured:


hadoop_HOME="/home/work/tools/java/hadoop-client/hadoop"
for f in $hadoop_HOME/hadoop-*.jar; do
        hadoop_CLASSPATH=${hadoop_CLASSPATH}:$f
done
for f in $hadoop_HOME/lib/*.jar; do
        hadoop_CLASSPATH=${hadoop_CLASSPATH}:$f
done
hadoopvfs_HOME="/home/work/tools/java/hadoop-client/hadoop-vfs"
for f in $hadoopvfs_HOME/lib/*.jar; do
        hadoop_CLASSPATH=${hadoop_CLASSPATH}:$f
done
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/work/tools/java/hadoop-client/hadoop/lib/native/Linux-amd64-64/

Here LD_LIBRARY_PATH is the path to the native libraries needed at run time, and hadoop_CLASSPATH collects the various jars of our hadoop client.
One thing to note: it is best not to use the variable name HADOOP_HOME, since that is an environment variable used by the system itself, and conflicting with it should be avoided.
To compile the class:


javac -classpath $CLASSPATH:$hadoop_CLASSPATH HDFSUtil.java

To run it:


java -classpath $CLASSPATH:$hadoop_CLASSPATH HDFSUtil

However, in actual use the program will report errors such as No Permission, or, even when you are sure the code is correct, it will still report strange errors at run time.
So the question is: what is going on here?
Answer: no configuration files for the corresponding cluster have been loaded.
Because the Definitive Guide glosses over this configuration, problems appear as soon as a real cluster is used. The fix is as follows:


this.conf = new Configuration(false);
conf.addResource("./hadoop-site.xml");
conf.addResource("./hadoop-default.xml");
conf.set("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());conf.set("fs.file.impl", org.apache.hadoop.fs.LocalFileSystem.class.getName());

Why is that? The book simply writes:

this.conf = new Configuration();

That works because by default the cluster is treated as local, so no extra configuration is needed; but in practice the configuration of each cluster is different, so we have to load the cluster's configuration explicitly.
This is very important, because in practice we are usually using a HADOOP client on a machine that is already inside a cluster with the environment set up, so we need to do this local configuration ourselves.
The files hadoop-site.xml and hadoop-default.xml are in the conf directory of the client being used, and their directory is the one given to addResource.

Only with the configuration described above in place can the program really run, so this configuration is very important.
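To check that the configuration files were actually picked up before going further, a small test like the following can help (an illustrative sketch only; the class name ConfCheck and the printed key are my own, not part of the original article). Note that addResource(Path) reads a file from the local filesystem, while addResource(String) searches the classpath for a resource of that name:


import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative sketch: verify that the cluster configuration is really loaded.
public class ConfCheck {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration(false);
    // addResource(Path) reads the file from the local filesystem;
    // addResource(String) would look the name up on the classpath instead.
    conf.addResource(new Path("./hadoop-default.xml"));
    conf.addResource(new Path("./hadoop-site.xml"));
    conf.set("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
    conf.set("fs.file.impl", org.apache.hadoop.fs.LocalFileSystem.class.getName());

    // If the resources were found this prints the cluster's namenode address;
    // if it prints null, the XML files were not picked up.
    System.out.println("fs.default.name = " + conf.get("fs.default.name"));

    // With the cluster configuration in effect this should be a
    // DistributedFileSystem, not a LocalFileSystem.
    FileSystem fs = FileSystem.get(conf);
    System.out.println("FileSystem class: " + fs.getClass().getName());
  }
}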

Below is the corresponding tool code for anyone interested. It works with file streams, which also makes it possible to transfer files between FTP and HDFS:


import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URI;
import java.net.URL;
import java.io.*;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.util.Progressable;

public class HDFSUtil {
  private String hdfs_node = "";
  private String hdfs_path = "";
  private String file_path = "";
  private String hadoop_site = "";
  private String hadoop_default = "";
  private Configuration conf = null;

  public HDFSUtil(String hdfs_node) {
    this.hdfs_node = hdfs_node;
  }

  public String getHdfsNode() {
    return this.hdfs_node;
  }

  public void setHdfsPath(String hdfs_path){
    this.hdfs_path = hdfs_path;
  }

  public String getHdfsPath(){
    return this.hdfs_path;
  }

  public void setFilePath(String file_path){
    this.file_path = file_path;
  }

  public String getFilePath(){
    return this.file_path;
  }

  public void setHadoopSite(String hadoop_site){
    this.hadoop_site = hadoop_site;
  }

  public String getHadoopSite(){
    return this.hadoop_site;
  }

  public void setHadoopDefault(String hadoop_default){
    this.hadoop_default = hadoop_default;
  }

  public String getHadoopDefault(){
    return this.hadoop_default;
  }

  // flag == false: load the cluster's own configuration files;
  // flag == true: use the default (local) Configuration.
  public int setConfigure(boolean flag) {
    if (flag == false){
      if (this.getHadoopSite().isEmpty() || this.getHadoopDefault().isEmpty()){
        return -1;
      }
      else {
        this.conf = new Configuration(false);
        conf.addResource(this.getHadoopDefault());
        conf.addResource(this.getHadoopSite());
        conf.set("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
        conf.set("fs.file.impl", org.apache.hadoop.fs.LocalFileSystem.class.getName());
        return 0;
      }
    }
    this.conf = new Configuration();
    return 0;
  }

  public Configuration getConfigure() {
    return this.conf;
  }

  public int upLoad(String localName, String remoteName) throws FileNotFoundException, IOException {
    InputStream inStream = null;
    FileSystem fs = null;
    try{
      inStream = new BufferedInputStream(new FileInputStream(localName));
      fs = FileSystem.get(URI.create(this.hdfs_node), this.conf);
      OutputStream outStream = fs.create(new Path(remoteName), new Progressable() {
        public void progress(){
          System.out.print('.');
        }
      });

      // copyBytes with close=true closes both streams when it is done
      IOUtils.copyBytes(inStream, outStream, 4096, true);
      return 0;
    } catch (IOException e){
      // closeStream ignores a null stream and any error on close
      IOUtils.closeStream(inStream);
      e.printStackTrace();
      return -1;
    }
  }

  public int upLoad(InputStream inStream, String remoteName) throws FileNotFoundException, IOException {
    FileSystem fs = null;
    try{
      fs = FileSystem.get(URI.create(this.hdfs_node), this.conf);
      OutputStream outStream = fs.create(new Path(remoteName), new Progressable() {
        public void progress(){
          System.out.print('.');
        }
      });

      // copyBytes with close=true closes both streams when it is done
      IOUtils.copyBytes(inStream, outStream, 4096, true);
      return 0;
    } catch (IOException e){
      IOUtils.closeStream(inStream);
      e.printStackTrace();
      return -1;
    }
  }

  public int downLoad(String remoteName, String localName, int lines) throws FileNotFoundException, IOException {
    FileOutputStream fos = null;
    InputStreamReader isr = null;
    BufferedReader br = null;
    String str = null;
    OutputStreamWriter osw = null;
    BufferedWriter buffw = null;
    PrintWriter pw = null;
    FileSystem fs = null;
    InputStream inStream = null;
    try {
      fs = FileSystem.get(URI.create(this.hdfs_node + remoteName), this.conf);
      inStream = fs.open(new Path(this.hdfs_node + remoteName));
      fos = new FileOutputStream(localName);
      osw = new OutputStreamWriter(fos, "UTF-8");
      buffw = new BufferedWriter(osw);
      pw = new PrintWriter(buffw);
      isr = new InputStreamReader(inStream, "UTF-8");
      br = new BufferedReader(isr);
      // copy at most 'lines' lines from the HDFS file into the local file
      while (lines > 0 && (str = br.readLine()) != null){
        lines--;
        pw.println(str);
      }
    } catch (IOException e){
      throw new IOException("Couldn't write.", e);
    } finally {
      // closing the PrintWriter also closes the writers and stream it wraps;
      // null checks guard against failures before the streams were opened
      if (pw != null) pw.close();
      if (inStream != null) inStream.close();
    }
    return 0;
  }

  //main to test
  public static void main(String[] args){
    String hdfspath = null;
    String localname = null;
    String hdfsnode = null;
    int lines = 0;

    if (args.length == 4){
      hdfsnode = args[0];
      hdfspath = args[1];
      localname = args[2];
      lines = Integer.parseInt(args[3]);
    }
    else{
      hdfsnode = "hdfs://nj01-nanling-hdfs.dmop.baidu.com:54310";
      hdfspath = "/app/ps/spider/wdmqa/wangweilong/test/HDFSUtil.java";
      localname = "/home/work/workspace/project/dhc2-0/dhc/base/ftp/papapa";
      lines = 5;
    }
    HDFSUtil hdfsutil = new HDFSUtil(hdfsnode);
    hdfsutil.setFilePath(hdfsutil.getHdfsNode()+hdfspath);
    hdfsutil.setHadoopSite("./hadoop-site.xml");
    hdfsutil.setHadoopDefault("./hadoop-default.xml");
    hdfsutil.setConfigure(false);
    try {
      hdfsutil.downLoad(hdfspath, localname, lines);
    } catch (IOException e){
      e.printStackTrace();
    }
  }
}
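
The main method above only tests the download path. Uploading works the same way; the following is a brief usage sketch (the class name UploadExample, the namenode address and the file paths are placeholders of mine, not from the original article):


import java.io.IOException;

// Illustrative sketch: upload a local file to HDFS with the HDFSUtil class above.
public class UploadExample {
  public static void main(String[] args) throws IOException {
    HDFSUtil hdfsutil = new HDFSUtil("hdfs://namenode.example.com:54310"); // placeholder namenode address
    hdfsutil.setHadoopSite("./hadoop-site.xml");
    hdfsutil.setHadoopDefault("./hadoop-default.xml");
    hdfsutil.setConfigure(false);
    // upLoad(localName, remoteName) copies the local file into HDFS
    hdfsutil.upLoad("/home/work/data/local.txt", "/user/work/remote.txt"); // placeholder paths
  }
}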

If you want to learn more about downloading files over FTP, please refer to this article:

(link: #)

If you want to move files back and forth between FTP and HDFS, just write a class that calls the interfaces of the tools in the two articles and wires them together with your own working code.
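
As a rough illustration of that glue code (a sketch only: it assumes the Apache Commons Net FTPClient for the FTP side, and the host, credentials and paths below are placeholders, not from the article):


import java.io.IOException;
import java.io.InputStream;

import org.apache.commons.net.ftp.FTPClient;

// Illustrative sketch: stream a file from an FTP server straight into HDFS
// through the stream-based upLoad overload of HDFSUtil.
public class FtpToHdfs {
  public static void main(String[] args) throws IOException {
    FTPClient ftp = new FTPClient();
    ftp.connect("ftp.example.com");              // placeholder host
    ftp.login("user", "password");               // placeholder credentials
    ftp.enterLocalPassiveMode();

    HDFSUtil hdfsutil = new HDFSUtil("hdfs://namenode.example.com:54310"); // placeholder namenode
    hdfsutil.setHadoopSite("./hadoop-site.xml");
    hdfsutil.setHadoopDefault("./hadoop-default.xml");
    hdfsutil.setConfigure(false);

    // retrieveFileStream returns an InputStream over the remote FTP file
    // (or null if it cannot be opened); upLoad copies it into HDFS.
    InputStream in = ftp.retrieveFileStream("/remote/ftp/file.txt");
    hdfsutil.upLoad(in, "/user/work/file.txt");
    ftp.completePendingCommand();

    ftp.logout();
    ftp.disconnect();
  }
}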

The above is the entire content of this article; I hope it helps you in your work with Java.


