React + Koa File Upload Example

  • 2021-11-01 23:27:26
  • OfStack

Contents

  • Background
  • Server-side dependencies
  • Backend configuration: cross-domain (CORS)
  • Backend configuration: static resource access with koa-static-cache
  • Backend configuration: request body parsing with koa-bodyparser
  • Front-end dependencies
  • Normal file upload (back end, front end)
  • Large file upload (front end, back end: upload, merge)
  • Breakpoint resumption
  • File identity check
  • Summary

Background

Recently, while working on a project, I implemented a file upload feature covering normal file upload, large file upload, breakpoint resumption, and so on.

Server-side dependencies

  • koa (Node.js framework)
  • koa-router (Koa routing)
  • koa-body (Koa body-parsing middleware; it can parse POST request content, including multipart file uploads)
  • koa-static-cache (Koa static resource middleware for serving static resource requests)
  • koa-bodyparser (parses request content into ctx.request.body)
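For orientation, a minimal server entry wiring these dependencies together might look like the sketch below. The file layout and port are assumptions; the CORS, static resource, and upload middleware configured in the following sections plug into this skeleton.


// app.js - a minimal sketch, assuming the dependencies above are installed
const Koa = require('koa');
const Router = require('koa-router');
const bodyParser = require('koa-bodyparser');

const app = new Koa();
const router = new Router();

// The CORS, static-resource, and upload middleware from the sections
// below are registered on `app` and `router` here

app.use(bodyParser());
app.use(router.routes()).use(router.allowedMethods());

app.listen(3000, () => console.log('Server listening on port 3000'));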

Backend configuration: cross-domain (CORS)


app.use(async (ctx, next) => {
  ctx.set('Access-Control-Allow-Origin', '*');
  ctx.set(
    'Access-Control-Allow-Headers',
    'Content-Type, Content-Length, Authorization, Accept, X-Requested-With, yourHeaderField',
  );
  ctx.set('Access-Control-Allow-Methods', 'PUT, POST, GET, DELETE, OPTIONS');
  if (ctx.method === 'OPTIONS') {
    //  Answer the CORS preflight request directly
    ctx.status = 200;
  } else {
    await next();
  }
});

Backend configuration: static resource access with koa-static-cache


//  Static resource handling
const KoaStaticCache = require('koa-static-cache');
app.use(
  KoaStaticCache('./public', {
    prefix: '/public',
    dynamic: true,
    gzip: true,
  }),
);

Backend configuration: request body parsing with koa-bodyparser


const bodyParser = require('koa-bodyparser');
app.use(bodyParser());

Front-end dependencies

  • React
  • Antd
  • axios
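The examples below call helpers such as uploadSmallFile and reference a baseURL that the original code does not show. Here is a minimal sketch of what they might look like as an axios wrapper; the baseURL value and helper names are assumptions inferred from how they are called.


// request.ts - a sketch of the axios helpers assumed by the examples below;
// the baseURL value and helper names are assumptions
import axios from 'axios';

export const baseURL = 'http://localhost:3000/';

const instance = axios.create({ baseURL });

// POST the FormData to the back end's /upload endpoint and unwrap the response body
export const uploadSmallFile = (formData: FormData) =>
  instance.post('/upload', formData).then((res) => res.data);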

Normal file upload

Back end

The back end only needs to configure koa-body with the right options and pass it to router.post('url', middleware, callback) as middleware.

Back-end code


const path = require('path');
const KoaBody = require('koa-body');

//  Upload configuration
const uploadOptions = {
  //  Enable multipart/form-data parsing (file uploads)
  multipart: true,
  formidable: {
    //  Upload directory: write straight into the public folder so files are easy to access.
    //  Remember the trailing / after the folder name
    uploadDir: path.join(__dirname, '../../public/'),
    //  Preserve file extensions
    keepExtensions: true,
  },
};
router.post('/upload', KoaBody(uploadOptions), (ctx, next) => {
  //  Get the uploaded file
  const file = ctx.request.files.file;
  //  The stored file gets a generated name; take the last path segment
  const fileName = path.basename(file.path);
  ctx.body = {
    code: 0,
    data: {
      url: `public/${fileName}`,
    },
    message: 'success',
  };
});

Front end

Here I deliver the file with FormData. The front end uses <input type='file' /> to open the file picker, reads the selected file from e.target.files[0] in the onChange event, then creates a FormData object and appends the file with formData.append('file', targetFile).

Front-end code


const Upload = () => {
  const [url, setUrl] = useState<string>('');
  const handleClickUpload = () => {
    const fileLoader = document.querySelector('#btnFile') as HTMLInputElement;
    // isNil comes from lodash
    if (isNil(fileLoader)) {
      return;
    }
    fileLoader.click();
  };
  const handleUpload = async (e: any) => {
    //  Get the uploaded file
    const file = e.target.files[0];
    const formData = new FormData();
    formData.append('file', file);
    //  Upload the file
    const { data } = await uploadSmallFile(formData);
    console.log(data.url);
    setUrl(`${baseURL}${data.url}`);
  };
  return (
    <div>
      <input type="file" id="btnFile" onChange={handleUpload} style={{ display: 'none' }} />
      <Button onClick={handleClickUpload}> Upload small file </Button>
      <img src={url} />
    </div>
  );
};

Alternative approaches

  • input + form: set the form's action to the back-end address with enctype="multipart/form-data" and method="post"; no JavaScript is needed (a minimal sketch follows below)
  • FileReader: read the file data and upload it manually; browser compatibility is not particularly good
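For reference, the input + form variant might look like this minimal sketch, written here as a React component; the action URL is an assumption.


// A minimal sketch of the input + form alternative; the action URL is an assumption
const NativeFormUpload = () => (
  <form action="http://localhost:3000/upload" method="post" encType="multipart/form-data">
    <input type="file" name="file" />
    <button type="submit">Upload</button>
  </form>
);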

Large file upload

When uploading a large file, the request may time out because the file is too big. In that case you can split the file into small chunks and send them to the server; each chunk carries enough information to identify which file it belongs to and its position within that file. Once all chunks have arrived, the back end merges them into a complete file, completing the transfer.

Front end

Getting the file works the same as before, so I won't repeat it. Set a default chunk size and slice the file; each chunk is named filename.index.ext. Recursively send requests until the whole file has been uploaded, then request the merge.

const handleUploadLarge = async (e: any) => {
  //  Get the uploaded file
  const file = e.target.files[0];
  //  Upload the file in chunks
  await uploadEveryChunk(file, 0);
};
const uploadEveryChunk = (file: File, index: number) => {
  console.log(index);
  const chunkSize = 512; //  Chunk size (512 bytes keeps the demo small; real apps would use far larger chunks)
  // [ file name, file extension ] - this simple split assumes a single dot in the name
  const [fname, fext] = file.name.split('.');
  //  Starting byte of the current chunk
  const start = index * chunkSize;
  if (start >= file.size) {
    //  Stop the recursion once the whole file has been sent, then request the merge
    return mergeLargeFile(file.name);
  }
  const blob = file.slice(start, start + chunkSize);
  //  Name each chunk filename.index.ext
  const blobName = `${fname}.${index}.${fext}`;
  const blobFile = new File([blob], blobName);
  const formData = new FormData();
  formData.append('file', blobFile);
  uploadLargeFile(formData).then((res) => {
    //  Recursively upload the next chunk
    uploadEveryChunk(file, ++index);
  });
};
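The chunk loop above relies on uploadLargeFile and mergeLargeFile, which the original code does not show. A minimal sketch, assuming the same axios instance as before and the endpoint paths defined by the back-end routes in the next section:


// request.ts (continued) - sketches of the helpers used above; the endpoint
// paths match the back-end routes below, the rest is assumed
export const uploadLargeFile = (formData: FormData) =>
  instance.post('/upload_chunk', formData).then((res) => res.data);

// koa-bodyparser on the server parses this JSON body into ctx.request.body
export const mergeLargeFile = (fileName: string) =>
  instance.post('/merge_chunk', { fileName }).then((res) => res.data);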

Back end

The back end needs to provide two endpoints.

Upload

Store each uploaded chunk in a folder named after the file, so the chunks can be merged later.


const path = require('path');
const fse = require('fs-extra');

const UPLOAD_DIR = path.resolve(__dirname, '../../temp');
//  formidable requires the upload directory to exist
fse.ensureDirSync(UPLOAD_DIR);

const uploadStencilPreviewOptions = {
  multipart: true,
  formidable: {
    uploadDir: UPLOAD_DIR, //  File storage directory
    keepExtensions: true,
    maxFieldsSize: 2 * 1024 * 1024,
  },
};

router.post('/upload_chunk', KoaBody(uploadStencilPreviewOptions), async (ctx) => {
  try {
    const file = ctx.request.files.file;
    // [ name, index, ext ] -  split the chunk file name
    const fileNameArr = file.name.split('.');

    //  Directory that stores the chunks of this file
    const chunkDir = `${UPLOAD_DIR}/${fileNameArr[0]}`;
    if (!fse.existsSync(chunkDir)) {
      //  Create the temporary directory for this large file if it does not exist
      await fse.mkdirs(chunkDir);
    }
    //  Each chunk is stored under its index
    const dPath = path.join(chunkDir, fileNameArr[1]);

    //  Move the chunk from temp into the temporary directory of this upload
    await fse.move(file.path, dPath, { overwrite: true });
    ctx.body = {
      code: 0,
      message: ' File uploaded successfully ',
    };
  } catch (e) {
    ctx.body = {
      code: -1,
      message: ` File upload failed: ${e.toString()}`,
    };
  }
});

Merge

Based on the merge request sent by the front end, use the file name it carries to locate that file's folder inside the temporary chunk cache. Read the chunks in index order and merge them with fse.appendFileSync(path, data) (appending each chunk in order), then delete the temporary folder to free the disk space.


router.post('/merge_chunk', async (ctx) => {
  try {
    const { fileName } = ctx.request.body;
    const fname = fileName.split('.')[0];
    const TEMP_DIR = path.resolve(__dirname, '../../temp');
    const static_preview_url = '/public/previews';
    const STORAGE_DIR = path.resolve(__dirname, `../..${static_preview_url}`);
    //  Make sure the target directory exists before appending
    fse.ensureDirSync(STORAGE_DIR);
    const chunkDir = path.join(TEMP_DIR, fname);
    const chunks = await fse.readdir(chunkDir);
    chunks
      //  Chunk file names are their numeric indexes, so sort them numerically
      .sort((a, b) => Number(a) - Number(b))
      .forEach((chunkPath) => {
        //  Append each chunk to the merged file in order
        fse.appendFileSync(
          path.join(STORAGE_DIR, fileName),
          fse.readFileSync(`${chunkDir}/${chunkPath}`),
        );
      });
    //  Delete the temporary folder
    fse.removeSync(chunkDir);
    //  URL where the merged file can be accessed
    const url = `http://${ctx.request.header.host}${static_preview_url}/${fileName}`;
    ctx.body = {
      code: 0,
      data: { url },
      message: 'success',
    };
  } catch (e) {
    ctx.body = { code: -1, message: ` Merge failed: ${e.toString()}` };
  }
});

Breakpoint resumption

If a page refresh or a transient failure interrupts a large file transfer, having to start over from scratch is a poor user experience. So we record the position where the transfer stopped and resume directly from there next time. I do this by reading and writing localStorage.


const handleUploadLarge = async (e: any) => {
  //  Get the uploaded file
  const file = e.target.files[0];
  const record = JSON.parse(localStorage.getItem('uploadRecord') as any);
  if (!isNil(record)) {
    //  For simplicity, name collisions are ignored here; to decide whether two
    //  files are identical you can hash the file. For large files you can hash
    //  a sample (first chunk + file size) instead
    if (record.name === file.name) {
      return await uploadEveryChunk(file, record.index);
    }
  }
  //  Upload the file in chunks from the beginning
  await uploadEveryChunk(file, 0);
};
const uploadEveryChunk = (file: File, index: number) => {
  const chunkSize = 512; //  Chunk size
  // [ file name, file extension ]
  const [fname, fext] = file.name.split('.');
  //  Starting byte of the current chunk
  const start = index * chunkSize;
  if (start >= file.size) {
    //  Stop the recursion once the whole file has been sent
    return mergeLargeFile(file.name).then(() => {
      //  Delete the record after a successful merge
      localStorage.removeItem('uploadRecord');
    });
  }
  const blob = file.slice(start, start + chunkSize);
  //  Name each chunk
  const blobName = `${fname}.${index}.${fext}`;
  const blobFile = new File([blob], blobName);
  const formData = new FormData();
  formData.append('file', blobFile);
  uploadLargeFile(formData).then((res) => {
    //  Record the position after each chunk uploads successfully
    localStorage.setItem(
      'uploadRecord',
      JSON.stringify({ name: file.name, index: index + 1 }),
    );
    //  Recursively upload the next chunk
    uploadEveryChunk(file, ++index);
  });
};

File identity check

Whether two files are identical can be judged by computing the file's MD5 or another hash. When the file is very large, hashing the whole thing may take a long time, so instead you can hash a sample (one chunk of the file plus the file size) and compare locally. The snippet below uses the crypto-js library to compute the MD5 and FileReader to read the file.


import CryptoJs from 'crypto-js';

//  Compute the sampled MD5 to see whether the file already exists.
//  tempFile, blob, getRandomFileName and uploadPreview come from the
//  surrounding component code and are not shown here
const sign = tempFile.slice(0, 512);
//  Sample = first 512 bytes + the file size
const signFile = new File(
  [sign, (tempFile.size as unknown) as BlobPart],
  '',
);
const reader = new FileReader();
reader.onload = function (event) {
  const binary = event?.target?.result;
  const md5 = binary && CryptoJs.MD5(binary as string).toString();
  const record = localStorage.getItem('upLoadMD5');
  if (isNil(md5)) {
    //  Fall back to a random file name when the MD5 could not be computed
    const file = blobToFile(blob, `${getRandomFileName()}.png`);
    return uploadPreview(file, 0, md5);
  }
  const file = blobToFile(blob, `${md5}.png`);
  if (isNil(record)) {
    //  No record: upload from the beginning and record this md5
    return uploadPreview(file, 0, md5);
  }
  const recordObj = JSON.parse(record);
  if (recordObj.md5 === md5) {
    //  Same file: resume from the recorded position (breakpoint resumption)
    return uploadPreview(file, recordObj.index, md5);
  }
  return uploadPreview(file, 0, md5);
};
reader.readAsBinaryString(signFile);
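The snippet uses a blobToFile helper that the original does not show; a minimal sketch of what it might look like:


// A sketch of the assumed blobToFile helper: wrap a Blob into a named File
const blobToFile = (blob: Blob, fileName: string): File =>
  new File([blob], fileName, { type: blob.type });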

Summary

I didn't know much about file uploads before. Implementing this feature gave me a preliminary understanding of the front-end and back-end code involved. The approaches shown here are just some of the options and certainly not all of them; I hope to keep improving them as I continue learning.
This is my first blog post on Juejin (Nuggets). After starting my internship I realized how much knowledge I was missing, and I hope that writing blog posts regularly will help me organize my knowledge and record my learning. If you spot any problems, please don't hesitate to point them out, thx

The above covers the React + Koa file upload example in detail. For more about implementing file upload with React and Koa, please follow the other related articles on this site!

