Android uses FFmpeg to implement audio and video synthesis and separation

  • 2021-10-27 08:55:32
  • OfStack

The previous article covered audio cutting, mixing, splicing and transcoding, and also described the CMake configuration and the import of the FFmpeg files in detail: Android uses FFmpeg for audio mixing, splicing and cutting. This article discusses the synthesis and separation of audio and video.

1. Audio extraction

To extract the audio from a multimedia file, the key options are "-acodec copy -vn": "-acodec copy" copies the audio stream without re-encoding, and "-vn" drops the video stream:


  /**
   * Use the ffmpeg command line to extract audio
   * @param srcFile    source file
   * @param targetFile target file
   * @return the command line as a string array
   */
  public static String[] extractAudio(String srcFile, String targetFile){
    //-vn: no video
    String extractAudioCmd = "ffmpeg -i %s -acodec copy -vn %s";
    extractAudioCmd = String.format(extractAudioCmd, srcFile, targetFile);
    return extractAudioCmd.split(" ");// split on spaces into a string array
  }
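
For reference, a minimal call site might look like the sketch below (the file names are hypothetical). Two caveats: splitting on spaces breaks if a path contains a space, and because "-acodec copy" does not re-encode, the target extension should match the source audio codec (for example, AAC audio into a .aac or .m4a file):

String src = PATH + File.separator + "input.mp4";         // hypothetical source with an audio stream
String dst = PATH + File.separator + "extractAudio.aac";  // extension should match the source codec
String[] commandLine = FFmpegUtil.extractAudio(src, dst);
// commandLine is {"ffmpeg", "-i", src, "-acodec", "copy", "-vn", dst}
executeFFmpegCmd(commandLine);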

2. Video extraction

To extract the video from a multimedia file, the key options are "-vcodec copy -an": "-vcodec copy" copies the video stream without re-encoding, and "-an" drops the audio stream:


  /**
   * Use the ffmpeg command line to extract video
   * @param srcFile    source file
   * @param targetFile target file
   * @return the command line as a string array
   */
  public static String[] extractVideo(String srcFile, String targetFile){
    //-an: no audio
    String extractVideoCmd = "ffmpeg -i %s -vcodec copy -an %s";
    extractVideoCmd = String.format(extractVideoCmd, srcFile, targetFile);
    return extractVideoCmd.split(" ");// split on spaces into a string array
  }
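
Because split(" ") breaks when a path contains a space, a safer variant builds the argument array directly. The helper below is a sketch, not part of the original utility class:

import java.util.ArrayList;
import java.util.List;

  /**
   * Hypothetical split()-free variant of extractVideo: building the argument
   * list directly keeps a path intact even if it contains spaces.
   */
  public static String[] extractVideoArgs(String srcFile, String targetFile){
    List<String> args = new ArrayList<>();
    args.add("ffmpeg");
    args.add("-i");
    args.add(srcFile);    // the path stays a single argument
    args.add("-vcodec");
    args.add("copy");     // copy the video stream without re-encoding
    args.add("-an");      // drop the audio stream
    args.add(targetFile);
    return args.toArray(new String[0]);
  }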

3. Audio and video synthesis

To mux separate audio and video files into one multimedia file, the key options are "-i %s -i %s -t %d": the two "-i" options specify the video input and the audio input, and "-t" limits the output duration. Note that if the original video file already contains an audio track, the pure video stream must be extracted first, and the standalone video and audio are then muxed together:


  /**
   * Use the ffmpeg command line to mux audio and video
   * @param videoFile video file
   * @param audioFile audio file
   * @param duration  output duration in seconds
   * @param muxFile   target file
   * @return the command line as a string array
   */
  @SuppressLint("DefaultLocale")
  public static String[] mediaMux(String videoFile, String audioFile, int duration, String muxFile){
    //-t: duration; to leave the duration unlimited, remove "-t %d"
    String mediaMuxCmd = "ffmpeg -i %s -i %s -t %d %s";
    mediaMuxCmd = String.format(mediaMuxCmd, videoFile, audioFile, duration, muxFile);
    return mediaMuxCmd.split(" ");// split on spaces into a string array
  }
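
As an aside, ffmpeg's stream mapping can produce the same result in one pass, without extracting a pure video file first: "-map 0:v" selects the video stream of the first input, "-map 1:a" the audio stream of the second, and "-c copy" avoids re-encoding. The helper below is a hypothetical alternative, not the method used in this article:

  /**
   * Hypothetical one-pass variant of mediaMux using ffmpeg stream mapping.
   */
  @SuppressLint("DefaultLocale")
  public static String[] mediaMuxByMap(String videoFile, String audioFile, int duration, String muxFile){
    // -map 0:v / -map 1:a pick the streams; -c copy skips re-encoding
    String muxCmd = "ffmpeg -i %s -i %s -map 0:v -map 1:a -c copy -t %d %s";
    muxCmd = String.format(muxCmd, videoFile, audioFile, duration, muxFile);
    return muxCmd.split(" ");// still assumes paths without spaces
  }

ffmpeg also provides the "-shortest" output option, which stops writing at the end of the shortest input and would make the manual Math.min duration computation shown below unnecessary.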

After the pure video has been extracted, the audio and video are muxed together:


public void handleMessage(Message msg) {
      super.handleMessage(msg);
      if(msg.what == 100){
        String audioFile = PATH + File.separator + "tiger.mp3";
        String muxFile = PATH + File.separator + "media-mux.mp4";
 
        try {
          // Use MediaPlayer to get the video duration
          MediaPlayer mediaPlayer = new MediaPlayer();
          mediaPlayer.setDataSource(videoFile);
          mediaPlayer.prepare();
          // getDuration() returns milliseconds; convert to seconds
          int videoDuration = mediaPlayer.getDuration()/1000;
          Log.i(TAG, "videoDuration=" + videoDuration);
          mediaPlayer.release();
          // Use MediaMetadataRetriever to get the audio duration
          MediaMetadataRetriever mediaRetriever = new MediaMetadataRetriever();
          mediaRetriever.setDataSource(audioFile);
          // METADATA_KEY_DURATION is in milliseconds
          String duration = mediaRetriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION);
          int audioDuration = (int)(Long.parseLong(duration)/1000);
          Log.i(TAG, "audioDuration=" + audioDuration);
          mediaRetriever.release();
          // Take the shorter of the audio and video durations
          int mDuration = Math.min(audioDuration, videoDuration);
          // Mux the pure video with the audio
          String[] commandLine = FFmpegUtil.mediaMux(temp, audioFile, mDuration, muxFile);
          executeFFmpegCmd(commandLine);
          isMux = false;
        } catch (Exception e) {
          e.printStackTrace();
        }
      }
    }
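
Since MediaMetadataRetriever can read the duration of both audio and video files, the two code paths above could be collapsed into a single helper. A minimal sketch (a hypothetical method, returning seconds):

  private static int getDurationSeconds(String mediaFile){
    MediaMetadataRetriever retriever = new MediaMetadataRetriever();
    try {
      retriever.setDataSource(mediaFile);
      // METADATA_KEY_DURATION is in milliseconds
      String ms = retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION);
      retriever.release();
      return ms != null ? (int) (Long.parseLong(ms) / 1000) : -1;
    } catch (Exception e) {
      return -1;
    }
  }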

After assembling the FFmpeg command line, call the native method to execute it:


  /**
   * Call ffmpeg to process audio and video
   * @param handleType handleType
   */
  private void doHandleMedia(int handleType){
    String[] commandLine = null;
    switch (handleType){
      case 0:// mux audio and video
        try {
          // The video file contains audio, so extract the pure video stream first
          commandLine = FFmpegUtil.extractVideo(videoFile, temp);
          isMux = true;
        } catch (Exception e) {
          e.printStackTrace();
        }
        break;
      case 1:// extract audio
        String extractAudio = PATH + File.separator + "extractAudio.aac";
        commandLine = FFmpegUtil.extractAudio(srcFile, extractAudio);
        break;
      case 2:// extract video
        String extractVideo = PATH + File.separator + "extractVideo.mp4";
        commandLine = FFmpegUtil.extractVideo(srcFile, extractVideo);
        break;
      default:
        break;
    }
    executeFFmpegCmd(commandLine);
  }
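
The snippets above reference a few fields declared elsewhere in the Activity. Their setup presumably looks roughly like the following sketch (the file names are placeholders, not taken from the article):

  // Assumed field setup; getExternalStorageDirectory() is deprecated on newer APIs
  private static final String PATH = Environment.getExternalStorageDirectory().getPath();
  private String srcFile = PATH + File.separator + "source.mp4"; // hypothetical input for extraction
  private String videoFile = PATH + File.separator + "video.mp4";// hypothetical video to mux
  private String temp = PATH + File.separator + "temp.mp4";      // pure-video intermediate file
  private boolean isMux = false;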

The callback for FFmpeg execution:

  /**
   * Execute the ffmpeg command line
   * @param commandLine commandLine
   */
  private void executeFFmpegCmd(final String[] commandLine){
    if(commandLine == null){
      return;
    }
    FFmpegCmd.execute(commandLine, new FFmpegCmd.OnHandleListener() {
      @Override
      public void onBegin() {
        Log.i(TAG, "handle media onBegin...");
      }
 
      @Override
      public void onEnd(int result) {
        Log.i(TAG, "handle media onEnd...");
        if(isMux){
          mHandler.obtainMessage(100).sendToTarget();
        }else {
          runOnUiThread(new Runnable() {
            @Override
            public void run() {
              Toast.makeText(MediaHandleActivity.this, "handle media finish...", Toast.LENGTH_SHORT).show();
            }
          });
        }
      }
    });
  }
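
The FFmpegCmd class itself belongs to the project's JNI layer. Based only on the calls above, its shape might be roughly as follows; the native method, library name and threading model are assumptions, and the real project may differ:

public class FFmpegCmd {

  static {
    // Assumed library name; it must match the CMake target from the previous article
    System.loadLibrary("media-handle");
  }

  public interface OnHandleListener {
    void onBegin();
    void onEnd(int result);
  }

  // Assumed native entry point that runs ffmpeg's main() on the given argv
  private static native int handle(String[] commandLine);

  public static void execute(final String[] commandLine, final OnHandleListener listener) {
    new Thread(new Runnable() {
      @Override
      public void run() {
        listener.onBegin();
        int result = handle(commandLine); // blocks until ffmpeg finishes
        listener.onEnd(result);
      }
    }).start();
  }
}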

That concludes this introduction to audio and video synthesis and separation with FFmpeg. If you have any questions or suggestions, feel free to get in touch.

Source code: Link address. If it helps you, feel free to fork and star it.

