Using Next.js to Remove Video Background Colors

Eugene Musebe

CLOUDINARY CHROMA KEYING

Introduction

This article demonstrates how a video's green screen color can be filtered out.

Codesandbox

The final project can be viewed on Codesandbox.

You can get the source code on GitHub.

Prerequisites

Prior entry-level understanding of JavaScript and React is required.

Project Setup

In your project directory, create a new Next.js app using the create-next-app CLI (note that create-next-app requires lowercase project names):

```bash
npx create-next-app chroma-keying
```

Head to the directory: `cd chroma-keying`

You can use `npm run dev` to run the Next.js server locally. Delete all the contents of the `pages/index.js` file and replace them with the following:

```jsx
import Processor from '../components/Processor';

export default function Home() {
  return (
    <div>
      <Processor />
    </div>
  );
}
```

In the code above, the root component exports a function named Home, which renders a component named Processor imported from components/Processor. Build this component by creating a folder named components and, inside it, a new file named Processor.jsx. In the Processor.jsx file, start by creating a functional component as shown below:

```jsx
export default function Processor() {
  return (
    <div>
      works
    </div>
  );
}
```

This should be enough to run the project in the browser, with the page looking as shown:

Backend

With the components set up, we'll continue by first setting up the project backend. The backend will handle the app's Cloudinary online storage and provide the uploaded video's Cloudinary link.

We start by configuring the Cloudinary environment variables in our application. Head to the official Cloudinary website, which offers a free tier accessible through this link. Sign up and log in to access your account dashboard. An example dashboard looks as follows.

The three environment variables will be the Cloud name, API Key, and API Secret. To use them, head back to your project root directory and create a file named `.env`. Inside it, paste the following:

```
CLOUDINARY_NAME=
CLOUDINARY_API_KEY=
CLOUDINARY_API_SECRET=
```

Fill in the blank spaces with your respective environment variables and restart your project.

With our keys set up, head to the `pages/api` folder and create a file named cloudinary.js. This is where we will write our upload function.

Start by pasting the following code

```js
var cloudinary = require('cloudinary').v2;

cloudinary.config({
  cloud_name: process.env.CLOUDINARY_NAME,
  api_key: process.env.CLOUDINARY_API_KEY,
  api_secret: process.env.CLOUDINARY_API_SECRET,
});
```

The above code configures Cloudinary with our environment variables. Below it, paste the following function:

```js
export default async function handler(req, res) {
  let uploaded_url = '';
  const fileStr = req.body.data;

  if (req.method === 'POST') {
    try {
      const uploadedResponse = await cloudinary.uploader.upload_large(fileStr, {
        resource_type: 'video',
        chunk_size: 6000000,
      });
      uploaded_url = uploadedResponse.secure_url;
      console.log(uploaded_url);
    } catch (error) {
      console.log(error);
    }
    res.status(200).json({ data: uploaded_url });
    console.log('complete!');
  }
}
```

The above is a Next.js handler function that receives a request body from the frontend and uploads it for online storage. On upload, the code captures the uploaded file's Cloudinary URL and assigns it to the uploaded_url variable, which is sent back to the frontend as a response.
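One caveat: Next.js API routes limit request bodies to 1 MB by default, so a base64-encoded video will likely be rejected before it reaches the handler. One way around this, sketched here with an assumed 50 MB limit (pick a size that fits your videos), is to export a `config` object from the same `pages/api/cloudinary.js` file:

```javascript
// Raise the body size limit for this API route. The 50 MB figure is an
// assumption -- adjust it to the size of the videos you expect to upload.
export const config = {
  api: {
    bodyParser: {
      sizeLimit: '50mb',
    },
  },
};
```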

With the above handler function, our backend is complete!

Frontend

The front end is simply the part of our project involving direct user interaction.

Earlier, we created our component; if you run the project, you can still see the content from the Processor component.

Inside the components/Processor.jsx file, start by pasting the necessary imports at the top of the page. We will only need one:

```js
import { useState, useRef } from 'react';
```

Inside the Processor function, paste the following

```js
let video, canvas, outputContext, temporaryCanvas, temporaryContext;
const canvasRef = useRef();
const [computed, setComputed] = useState(false);
const [link, setLink] = useState('');
```

Each of the above variables will be understood as we move on.

Replace your function's return statement with the following code:

```jsx
<>
  <header className="header">
    <div className="text-box">
      <h1 className="heading-primary">
        <span className="heading-primary-main">Cloudinary Chroma Keying</span>
      </h1>
      <a href="#" className="btn btn-white btn-animated" onClick={computeFrame}>Remove Background</a>
    </div>
  </header>
  <div className="row">
    <div className="column">
      <video className="video" crossOrigin="Anonymous" src="https://res.cloudinary.com/dogjmmett/video/upload/v1632221403/sample_mngu99.mp4" id="video" width="400" height="360" controls autoPlay muted loop type="video/mp4" />
    </div>
    <div className="column">
      {link ? <a href={link}>LINK : {link}</a> : <h3>your link shows here...</h3>}
      <canvas className="canvas" ref={canvasRef} id="output-canvas" width="500" height="360"></canvas><br />
    </div>
  </div>
</>
```

The above code creates the UI that users will interact with, shown below.

If the page does not render, don't worry: the REMOVE BACKGROUND button references an onClick handler named computeFrame that does not exist yet. Let's create it. Above the return statement, paste the following:

```js
function computeFrame() {

}
```

Your UI should work by now.

Let us now flesh out the computeFrame function. When fired, it should remove the green color from the video while keeping the foreground. Start by pasting the following inside it:

```js
video = document.getElementById('video');

temporaryCanvas = document.createElement('canvas');
temporaryCanvas.setAttribute('width', 800);
temporaryCanvas.setAttribute('height', 450);
temporaryContext = temporaryCanvas.getContext('2d');

canvas = document.getElementById('output-canvas');
outputContext = canvas.getContext('2d');
```

Above, we begin using the variables created earlier. The video variable references the video element in our return statement. We then create a temporary canvas element, assign it to the temporaryCanvas variable, and configure its width and height. We assign its 2D context to the temporaryContext variable, which we will use to grab the current video frame as an image by passing the video element and its dimensions to the drawImage method:

```js
temporaryContext.drawImage(video, 0, 0, video.width, video.height);
let frame = temporaryContext.getImageData(0, 0, video.width, video.height);
```

We can now remove the green-screen background. The frame's image data is a single flat array: the pixels of the first row come first, followed by the pixels of the next row, and so on until the whole image is covered. Each pixel occupies four entries, the three RGB values plus an alpha (transparency) value, so the array is four times the number of pixels. We will loop over every pixel and read its RGB values: a pixel's data starts at its pixel number multiplied by four, plus an offset, with R at offset 0, G at offset 1, and B at offset 2.

```js
for (let i = 0; i < frame.data.length / 4; i++) {
  let r = frame.data[i * 4 + 0];
  let g = frame.data[i * 4 + 1];
  let b = frame.data[i * 4 + 2];
}
```
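The index arithmetic described above can be checked with a small standalone sketch (not part of the app; `channelIndex` is a hypothetical helper name):

```javascript
// Row-major RGBA index math: pixel (x, y) in a frame `width` pixels wide
// starts at (y * width + x) * 4; the offset selects R (0), G (1), B (2)
// or alpha (3).
function channelIndex(x, y, width, offset) {
  return (y * width + x) * 4 + offset;
}

// In a 2-pixel-wide frame, the red channel of pixel (1, 1) sits at index 12,
// and the alpha channel of pixel (0, 0) at index 3.
const red = channelIndex(1, 1, 2, 0);   // → 12
const alpha = channelIndex(0, 0, 2, 3); // → 3
```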

At this point, we can check each pixel's RGB values against a range that resembles the green-screen color and set the alpha of matching pixels to zero, which removes the green screen. This modifies the code above as follows:

```js
for (let i = 0; i < frame.data.length / 4; i++) {
  let r = frame.data[i * 4 + 0];
  let g = frame.data[i * 4 + 1];
  let b = frame.data[i * 4 + 2];

  if (r > 70 && r < 160 && g > 95 && g < 220 && b > 25 && b < 150) {
    frame.data[i * 4 + 3] = 0;
  }
}
```
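To see the thresholding in isolation, here is a standalone sketch that applies the same range check to a tiny hand-made RGBA array (the pixel values are made up for illustration):

```javascript
// Same green-screen threshold as above, applied to a flat RGBA array.
function keyOutGreen(data) {
  for (let i = 0; i < data.length / 4; i++) {
    const r = data[i * 4 + 0];
    const g = data[i * 4 + 1];
    const b = data[i * 4 + 2];
    if (r > 70 && r < 160 && g > 95 && g < 220 && b > 25 && b < 150) {
      data[i * 4 + 3] = 0; // make the matching pixel fully transparent
    }
  }
  return data;
}

const frame = Uint8ClampedArray.from([
  100, 200, 60, 255, // greenish pixel: alpha drops to 0
  200, 50, 40, 255,  // reddish pixel: alpha stays 255
]);
keyOutGreen(frame);
```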

Better results can be achieved with a more advanced algorithm, but for our case this is enough. Finally, draw the processed frame to the output canvas and use setTimeout to call computeFrame recursively, creating a rendering loop:

```js
outputContext.putImageData(frame, 0, 0);
setTimeout(computeFrame, 0);
```

Your green screen should disappear at this point.

![Green Screen Removed](https://res.cloudinary.com/dogjmmett/image/upload/v1644218569/greenScreenRemoved_y56aos.png "Green Screen Removed")

Next, we record our animated canvas as a WebM file using the MediaRecorder API for Cloudinary upload.

Create an array to hold the recorded media chunks:

```js
const chunks = [];
```

Create another constant to reference our canvas element via the ref created with the useRef hook:

```js
const cnv = canvasRef.current;
```

Grab the canvas MediaStream:

```js
const stream = cnv.captureStream();
```

Initialize the recorder, and let it store data in our array each time the recorder has new data:

```js
const rec = new MediaRecorder(stream);
rec.ondataavailable = (e) => chunks.push(e.data);
```

A complete blob will be constructed when the recorder stops:

```js
rec.onstop = (e) => uploadHandler(new Blob(chunks, { type: 'video/webm' }));
rec.start();
```

You can also set when the recording stops. We shall stop ours after 16 seconds:

```js
setTimeout(() => rec.stop(), 16000);
```

In the code above, notice the `uploadHandler` function. If we run the project now, we will get an error because `uploadHandler` is not defined. We solve this by creating the function itself:

```js
async function uploadHandler() {

}
```

The function above will be used to send our WebM files to the backend for upload. Replace it with its full code below:

```js
async function uploadHandler(blob) {
  await readFile(blob).then((encoded_file) => {
    try {
      fetch('/api/cloudinary', {
        method: 'POST',
        body: JSON.stringify({ data: encoded_file }),
        headers: { 'Content-Type': 'application/json' },
      })
        .then((response) => response.json())
        .then((data) => {
          setComputed(true);
          setLink(data.data);
        });
    } catch (error) {
      console.error(error);
    }
  });
}
```

The function is async because it uses an await expression, which suspends execution until the promise is fulfilled; the resolved value becomes the await expression's return value. Use the following [link](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/async_function) for a better understanding of asynchronous functions. In our case, the await expression takes the blob passed from the `computeFrame` function and resolves to a base64-encoded file returned by a FileReader helper. Paste the following to include your file reader:

```js
function readFile(file) {
  return new Promise(function (resolve, reject) {
    let fr = new FileReader();

    fr.onload = function () {
      resolve(fr.result);
    };

    fr.onerror = function () {
      reject(fr);
    };

    fr.readAsDataURL(file);
  });
}
```
A FileReader allows us to asynchronously read the contents of our blob object. You can read more about FileReader through this [link](https://developer.mozilla.org/en-US/docs/Web/API/FileReader).
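As an aside, `readAsDataURL` resolves to a data URL: the file's MIME type plus its bytes in base64. The hypothetical Node-side sketch below (not part of the app; `toDataURL` is an illustrative name) mimics the shape of that string:

```javascript
// Mimic the string FileReader.readAsDataURL produces in the browser:
// a data URL wrapping the bytes in base64.
function toDataURL(buffer, mimeType) {
  return `data:${mimeType};base64,${buffer.toString('base64')}`;
}

const encoded = toDataURL(Buffer.from('hello'), 'video/webm');
// encoded === 'data:video/webm;base64,aGVsbG8='
```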
Back to our `uploadHandler` function: we use a try-catch block to call our backend with the POST method, sending the encoded file in the request body. The response is assigned to the earlier-created `link` state variable, which is then displayed on the frontend in case the user wishes to download the generated video content.

That's it! We have created our own chroma keying web application. Try it out to enjoy the experience.

Happy coding!

Eugene Musebe

Software Developer

I'm a full-stack software developer, content creator, and tech community builder based in Nairobi, Kenya. I am addicted to learning new technologies and love working with like-minded people.