Video merging with NextJS

Eugene Musebe

Introduction

This MediaJam demonstrates how to merge two videos using Next.js.

Codesandbox

The final demo on Codesandbox.

You can also get the full project from the GitHub repo.

Prerequisites

Entry-level JavaScript and React/Next.js knowledge.

Setting Up the Sample Project

Use the command npx create-next-app videomerge to create a new Next.js project, then head into the directory with cd videomerge.

Install the necessary dependency:

npm install cloudinary

We will begin by setting up our backend, which uses Cloudinary to upload our media files.

Use this link to create your Cloudinary account and log into it. Your dashboard gives you access to the environment variables you will need.

In your project root directory, create a new file named .env and paste the following, filling in the blanks with the environment variables from your Cloudinary dashboard.

CLOUDINARY_CLOUD_NAME=
CLOUDINARY_API_KEY=
CLOUDINARY_API_SECRET=

Restart your project using npm run dev.

In the pages/api folder, create a new file named upload.js. We will use this for our backend integration.

Start by requiring the Cloudinary library and configuring it with the environment keys:

var cloudinary = require("cloudinary").v2;

cloudinary.config({
  cloud_name: process.env.CLOUDINARY_CLOUD_NAME,
  api_key: process.env.CLOUDINARY_API_KEY,
  api_secret: process.env.CLOUDINARY_API_SECRET,
});

Create a handler function to execute the POST request:

export default async function handler(req, res) {
  if (req.method === "POST") {
    let url = "";
    try {
      let fileStr = req.body.data;
      const uploadedResponse = await cloudinary.uploader.upload_large(
        fileStr,
        {
          resource_type: "video",
          chunk_size: 6000000,
        }
      );
      url = uploadedResponse.url;
    } catch (error) {
      // Return here so we never send a second response below.
      return res.status(500).json({ error: "Something went wrong" });
    }

    res.status(200).json({ data: url });
  }
}

The function above receives media data from the frontend and uploads it to Cloudinary. It also captures the media file's Cloudinary link and stores it in the url variable, which is finally sent back to the front end as the response.
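One caveat worth noting: Next.js API routes cap the request body at 1 MB by default, and a video encoded as a data URL will easily exceed that. You can raise the limit by exporting a route config from the same upload.js file; the 100 MB value below is an assumption you should adjust to your own video sizes.

```javascript
// Raise the body size limit for this API route so large
// data-URL-encoded videos are not rejected by Next.js.
export const config = {
  api: {
    bodyParser: {
      sizeLimit: "100mb", // assumed ceiling; tune to your videos
    },
  },
};
```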

This concludes our backend. Let us now merge our videos.

Front End

To merge two videos in Next.js, we will, of course, require two sample videos. For a simplified demonstration, one of them should contain a single dominant color fill, such as a green-screen background. This lets us easily replace the green pixels with frames from the second video. For this article, we will use two samples for our foreground and background respectively. Place the videos in your project so they are served as videos/foreground.mp4 and videos/background.mp4.

In your pages/index.js, start by importing the following hooks.

import { useState, useRef, useEffect } from "react";

In the root component, we'll start by declaring our variables. We will use the following:

- foreground - references the foreground video DOM element.
- background - a video element we create to play our background.
- canvas - the canvas we will merge our videos into.
- context - the canvas 2D context; we use its drawImage method to paint frames at the video's size.
- temporaryCanvas - an off-screen canvas we use to extract each frame.
- temporaryContext - its 2D context, used to read pixel data from both videos.
- link - a state hook to contain the backend response link.
- blob - a state hook that will store the processed blob for video upload.

Use the code below to declare the variables above:

let foreground, background, canvas, context, temporaryCanvas, temporaryContext;
const canvasRef = useRef();
const [link, setLink] = useState("");
const [blob, setBlob] = useState();

Start by creating the video element and canvas in the root function return statement.

return (
  <div>
    <div className="container">
      <div className="header">
        <h1 className="heading">
          <span onClick={computeFrame} className="heading-primary-main">
            <b>Merge videos with nextjs</b>
          </span>
        </h1>
      </div>
    </div>
    <div className="row">
      <div className="column">
        <video
          className="video"
          crossOrigin="Anonymous"
          src="videos/foreground.mp4"
          id="video"
          width="800"
          height="450"
          autoPlay
          muted
          loop
          type="video/mp4"
        />
      </div>
      <div className="column">
        {link ? (
          <a href={link}>LINK : {link}</a>
        ) : (
          <h3>your link will show here...</h3>
        )}
        <canvas
          className="canvas"
          ref={canvasRef}
          id="output-canvas"
          width="800"
          height="450"
        ></canvas>
        <br />
        <a
          href="#"
          className="btn btn-white btn-animated"
          onClick={uploadHandler}
        >
          Get video Link
        </a>
      </div>
    </div>
  </div>
);

We will wrap our setup code in a useEffect hook so the videos start processing when the page renders. Inside the hook, start by referencing the video element and canvas.

foreground = document.getElementById("video");
canvas = document.getElementById("output-canvas");
context = canvas.getContext("2d");

Create a video element for the background and let it play and loop while muted:

background = document.createElement("video");
background.setAttribute("width", 800);
background.setAttribute("height", 450);
background.src = "videos/background.mp4";
background.muted = true;
background.autoplay = true;
background.play();
background.loop = true;

Create the temporary canvas and reference its context, then attach a play event listener to the foreground video that runs the computeFrame function.

temporaryCanvas = document.createElement("canvas");
temporaryCanvas.setAttribute("width", 800);
temporaryCanvas.setAttribute("height", 450);
temporaryContext = temporaryCanvas.getContext("2d");
foreground.addEventListener("play", computeFrame);

Your useEffect should look as follows:

useEffect(() => {
  foreground = document.getElementById("video");
  canvas = document.getElementById("output-canvas");
  context = canvas.getContext("2d");

  background = document.createElement("video");
  background.setAttribute("width", 800);
  background.setAttribute("height", 450);
  background.src = "videos/background.mp4";
  background.muted = true;
  background.autoplay = true;
  background.play();
  background.loop = true;

  temporaryCanvas = document.createElement("canvas");
  temporaryCanvas.setAttribute("width", 800);
  temporaryCanvas.setAttribute("height", 450);
  temporaryContext = temporaryCanvas.getContext("2d");
  foreground.addEventListener("play", computeFrame);
}, []);

In the computeFrame function, we will start by drawing the current foreground frame into the temporary canvas and reading its pixel data:

temporaryContext.drawImage(foreground, 0, 0, foreground.width, foreground.height);
let frame = temporaryContext.getImageData(0, 0, foreground.width, foreground.height);

Do the same for the background:

temporaryContext.drawImage(background, 0, 0, background.width, background.height);
let frame2 = temporaryContext.getImageData(
  0,
  0,
  background.width,
  background.height
);

The image data we extracted above is a single flat array: it starts with the pixels of the first row, left to right, then continues with the next row, and so on until the entire image is covered. Each pixel occupies four consecutive entries: the first three are its R, G, and B values, and the fourth is the alpha (opacity) value. The final array size is therefore four times the number of pixels.
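To make the layout concrete, a tiny helper of our own (not part of the article's code) computes the array offset of any pixel from its coordinates:

```javascript
// Each pixel occupies four consecutive entries (R, G, B, A), and rows
// are stored one after another, so the offset of the R value of the
// pixel at column x, row y of a frame `width` pixels wide is:
function pixelOffset(x, y, width) {
  return (y * width + x) * 4;
}
```

For an 800-pixel-wide frame, pixelOffset(0, 1, 800) is 3200, because the second row starts 800 * 4 entries into the array.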

Create a loop that walks over every pixel and reads its RGB values. Because each pixel spans four array entries, we multiply the pixel index by 4; an offset of 0 gives us R, while G and B need offsets of 1 and 2 respectively.

for (let i = 0; i < frame.data.length / 4; i++) {
  let r = frame.data[i * 4 + 0];
  let g = frame.data[i * 4 + 1];
  let b = frame.data[i * 4 + 2];
}

We will then use an if statement to detect each pixel close to the green color and replace its RGB values with those of the corresponding pixel from the second video. That merges the green-screen foreground with the background video. In our demo, this places Spider-Man in Paris.
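As a sketch, the per-pixel replacement can be isolated into a small pure helper. The thresholds below (r < 100, g > 100, b < 100) are an assumption about what counts as "green enough" and should be tuned to your own footage:

```javascript
// Hypothetical helper: given the flat RGBA arrays (ImageData.data) of a
// foreground and a background frame, replace every sufficiently green
// foreground pixel with the matching background pixel.
function mergeFrames(fgData, bgData) {
  for (let i = 0; i < fgData.length / 4; i++) {
    const r = fgData[i * 4 + 0];
    const g = fgData[i * 4 + 1];
    const b = fgData[i * 4 + 2];
    // Assumed chroma-key thresholds; adjust for your green screen.
    if (r < 100 && g > 100 && b < 100) {
      fgData[i * 4 + 0] = bgData[i * 4 + 0]; // take R from background
      fgData[i * 4 + 1] = bgData[i * 4 + 1]; // take G from background
      fgData[i * 4 + 2] = bgData[i * 4 + 2]; // take B from background
    }
  }
  return fgData;
}
```

Inside computeFrame you would call mergeFrames(frame.data, frame2.data), paint the result with context.putImageData(frame, 0, 0), and schedule the next frame with requestAnimationFrame(computeFrame).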

With our videos merged, we can capture the processed canvas as a media stream, record it into blob chunks, and store the result in the blob state hook we created earlier.
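A minimal sketch of that step, assuming a fixed recording window (MERGE_SECONDS is our own constant, not from the article) and the standard canvas.captureStream, MediaRecorder, and FileReader browser APIs; buildUploadPayload is a small hypothetical helper that shapes the JSON body our /api/upload route reads from req.body.data:

```javascript
const MERGE_SECONDS = 10; // assumed length of the clip to record

// Shape the JSON body that the /api/upload route expects.
function buildUploadPayload(dataUrl) {
  return JSON.stringify({ data: dataUrl });
}

// Record the merged canvas into a webm blob and store it in state.
function recordCanvas() {
  const stream = canvas.captureStream(25); // capture at 25 fps
  const recorder = new MediaRecorder(stream, { mimeType: "video/webm" });
  const chunks = [];
  recorder.ondataavailable = (e) => chunks.push(e.data);
  recorder.onstop = () => setBlob(new Blob(chunks, { type: "video/webm" }));
  recorder.start();
  setTimeout(() => recorder.stop(), MERGE_SECONDS * 1000);
}

// uploadHandler reads the recorded blob as a data URL and POSTs it to
// the backend, then stores the returned Cloudinary link in state.
async function uploadHandler(e) {
  e.preventDefault();
  const reader = new FileReader();
  reader.readAsDataURL(blob);
  reader.onloadend = async () => {
    const res = await fetch("/api/upload", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: buildUploadPayload(reader.result),
    });
    const json = await res.json();
    setLink(json.data); // shown by the {link ? ...} block in the JSX
  };
}
```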

Eugene Musebe

Software Developer

I’m a full-stack software developer, content creator, and tech community builder based in Nairobi, Kenya. I am addicted to learning new technologies and love working with like-minded people.