Video Skit Maker in Next.js - Video Transformations

Emma Alder

In the first part of this post, we discussed how to upload a video with an audio clip to make a skit (short video clip). In this tutorial, we will learn how to add further transformations to the video. The transformations will add text, a progress bar, blur, and crop to the video as desired.

You don’t need to read the first part of this tutorial to follow through with this one.

Sandbox

We completed the project on CodeSandbox; you can fork it to get started quickly.

<CodeSandbox id="audio-video-transformation-6q9x2" title="Video Transformation"/>

GitHub repository: https://github.com/Olanetsoft/audio-video-transformation

Prerequisites & Installation

Knowledge of JavaScript and React.js is required to follow along with this article. You also need Node.js and its package manager, npm, installed to proceed.

Verify you have Node installed using the following terminal command:

node -v && npm -v

The above command should output the respective version numbers if both are installed.

Scaffolding a Next.js Project

In the first part of this post, we built a Next.js app that allowed us to upload videos and add an audio transformation using Cloudinary. In that part, we used the Cloudinary upload widget to upload media assets to Cloudinary. The component's state stores the returned asset’s Cloudinary public ID.

We will continue development on the first part’s Next.js app, which you can fork to proceed: https://codesandbox.io/s/video-and-audio-upload-with-nextjs-zvuo9

Alternatively, we can create a new Next.js project using the npx create-next-app command.
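
For reference, the command looks like this; <project name> is a placeholder for a name of our choice:

npx create-next-app <project name>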

Once the app is initialized and the dependencies automatically installed, we will see a message with instructions for navigating to our site and running it locally. We do this with the following command:

cd <project name> && npm run dev

Next.js will start a hot-reloading development environment accessible by default at http://localhost:3000.

Installing Cloudinary

We will use the Cloudinary React package, a React.js library that helps us optimally render Cloudinary videos and handles video transformations.

We can use Cloudinary’s robust transformation features to modify the video distributed through an integrated content delivery network (CDN).
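
For context, every transformation we apply through the React components maps to parameters in the video’s delivery URL. For instance, scaling a video to 500x350 would produce a URL shaped roughly like the one below, where <video-public-id> is a placeholder for an uploaded asset’s public ID:

https://res.cloudinary.com/olanetsoft/video/upload/w_500,h_350,c_scale/<video-public-id>.mp4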

We install the cloudinary-react package in the project’s directory using npm with:

npm i cloudinary-react

Adding video transformation controls

We will display the controls for the video transformations on the home page by updating the functional component in pages/index.js, created in part one, to include the code snippet in this GitHub Gist:

https://gist.github.com/Chuloo/420595bb7e083a6d9d9ec84f85e0aa85

Here, we added form fields (sketched after this list) for:

  • A progress bar with a color of either red or blue
  • Video crop with two options of scale and crop
  • Custom text input
  • A blur effect on the video
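
As a rough sketch, assuming markup similar to the inputs we wire up later in this tutorial (the exact JSX, wrappers, headings, and class names are in the Gist above), the controls could look like this:

<form>
  <h3>Progress Bar</h3>
  <input type="radio" name="color" value="blue" />
  <label>Blue</label>
  <input type="radio" name="color" value="red" />
  <label>Red</label>

  <h3>Crop Video</h3>
  <label className="label">Select Type</label>
  <input type="radio" name="crop" value="scale" />
  <label>Scale</label>
  <input type="radio" name="crop" value="crop" />
  <label>Crop</label>

  <h3>Text</h3>
  <label className="label">Add Text</label>
  <input id="text" type="text" />

  <h3>Blur Effect</h3>
  <label className="label">Adjust Blur Effect</label>
  <input type="number" />
</form>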

The current user interface doesn’t look aesthetically pleasing, so we add some style with CSS. We update the styles in /css/style.css to the content in this GitHub Gist:

https://gist.github.com/Chuloo/33434d6ed74109bb032313c33652efff

We import this CSS file in the _app.js file in the pages directory of the project. If the _app.js file doesn’t exist or the CSS isn’t imported, we need to create the file inside the pages directory. This file is native to Next.js and wraps the whole application. We import the CSS file we created into _app.js with:

1import "../css/style.css";
2 export default function MyApp({ Component, pageProps }) {
3 return <Component {...pageProps} />;
4 }

Our application should look like this on http://localhost:3000/.

Implementing transformation functions

To handle video transformations, we create a component that applies transformations depending on the props passed to it. We create a components/ folder in the project root and add a video.js file in the folder with the following content:

import { CloudinaryContext, Transformation, Video } from "cloudinary-react";

const TransformVideo = ({ video, audio }) => {
  return (
    <CloudinaryContext cloudName="olanetsoft">
      <Video publicId={video} controls autoplay="true">
        <Transformation overlay={`video:${audio}`} />
      </Video>
    </CloudinaryContext>
  );
};

export default TransformVideo;

Here, we imported CloudinaryContext, a wrapper component used to manage shared information across all its child Cloudinary components. The TransformVideo component takes the video and audio public IDs as props. The video public ID is passed to the Video component, and the audio is set as an overlay transformation on the video.

The above code block will render the uploaded video with the background audio when we import it into pages/index.js:

import React, { useState } from "react";
import { Helmet } from "react-helmet";
import TransformVideo from "../components/video";

const App = () => {
  const [videoPublicId, setVideoPublicId] = useState("");
  const [alt, setAlt] = useState("");
  const [audioPublicId, setAudioPublicId] = useState("");

  const openWidget = () => {
    // widget creation logic
    const widget = window.cloudinary.createUploadWidget(
      {
        cloudName: "olanetsoft",
        uploadPreset: "w42epls6"
      },
      (error, result) => {
        if (result.event === "success") {
          console.log(result.info);
          if (result.info.is_audio === true) {
            setAudioPublicId(result.info.public_id);
            setAlt(`A file of ${result.info.original_filename}`);
          } else {
            setVideoPublicId(result.info.public_id);
            setAlt(`A file of ${result.info.original_filename}`);
          }
        }
      }
    );
    widget.open();
  };

  return (
    <div>
      <main className="App">
        <section className="right-side">
          <h1>The resulting video with audio will be displayed here</h1>
          {videoPublicId && (
            <TransformVideo audio={audioPublicId} video={videoPublicId} />
          )}
        </section>
        {/* JSX code in here, including form fields and buttons */}
      </main>
    </div>
  );
};

export default App;

After importing the TransformVideo component and uploading a video, we should have a video playing, which should look like this:

We'll update our TransformVideo component to accept props for various video transformation operations. Let’s start with changing the color of the progress bar. We update the TransformVideo component to include a color prop.

const TransformVideo = ({ color, video, audio }) => {
  return (
    <CloudinaryContext cloudName="olanetsoft">
      <Video publicId={video} controls autoplay="true">
        <Transformation overlay={`video:${audio}`} />

        {/* Add the progress bar with the selected color here */}
        <Transformation effect={`progressbar:bar:${color}:30`} />
      </Video>
    </CloudinaryContext>
  );
};

In pages/index.js, we create a state variable to contain the selected color using the useState hook, with a default color of green.

import React, { useState } from "react";

const App = () => {
  const [color, setColor] = useState("green");
  //...
};

We add an onChange event handler to the rendered radio inputs to update the color state variable once an option is selected.

// ...
<input
  type="radio"
  value="blue"
  name="color"
  // Add the onChange attribute here
  onChange={(event) => setColor(event.target.value)}
/>
<label>Blue</label>
<input
  type="radio"
  value="red"
  name="color"
  // Add the onChange attribute here
  onChange={(event) => setColor(event.target.value)}
/>
<label>Red</label>
// ...

Next, we update the TransformVideo component rendered in pages/index.js to include the progress bar color information.

import React, { useState } from "react";
import { Helmet } from "react-helmet";
import TransformVideo from "../components/video";

const App = () => {
  // state variables and method definitions go in here
  return (
    <div>
      <main className="App">
        {/* JSX rendered in here... */}

        <section className="right-side">
          <h1>The resulting video with audio will be displayed here</h1>
          {videoPublicId && (
            <TransformVideo
              color={color}
              audio={audioPublicId}
              video={videoPublicId}
            />
          )}
        </section>
      </main>
    </div>
  );
};

export default App;

Similarly, to add Cloudinary transformations for crop, text, and blur effects, we modify the TransformVideo component’s definition to accept the values as props and create the corresponding state variables in pages/index.js.

We modify TransformVideo to:

import { CloudinaryContext, Transformation, Video } from "cloudinary-react";

const TransformVideo = ({ crop, color, text, blur, audio, video }) => {
  return (
    <CloudinaryContext cloudName="olanetsoft">
      <Video publicId={video} controls autoplay="true">
        <Transformation overlay={`video:${audio}`} />
        <Transformation effect={`progressbar:bar:${color}:30`} />

        {/* Added more transformation effects below */}
        <Transformation
          overlay={{
            fontFamily: "arial",
            fontSize: 60,
            text
          }}
          endOffset="9.0"
          gravity="south"
          startOffset="2.0"
          y="80"
        />
        <Transformation effect={`blur:${blur}`} crop={crop} />
        <Transformation width="500" height="350" crop={crop} />
      </Video>
    </CloudinaryContext>
  );
};

export default TransformVideo;

In pages/index.js, we add state variables to manage the blur, crop, and text transformations.

import React, { useState } from "react";
import { Helmet } from "react-helmet";
import TransformVideo from "../components/video";

const App = () => {
  const [videoPublicId, setVideoPublicId] = useState("");
  const [alt, setAlt] = useState("");
  const [audioPublicId, setAudioPublicId] = useState("");
  const [textValue, setTextValue] = useState(" ");
  const [color, setColor] = useState("green");
  const [crop, setCrop] = useState("scale");
  const [blur, setBlur] = useState("");

  return (
    {/* Returned JSX goes in here */}
  );
};

export default App;

We also set default state values for each variable.

For each transformation’s input element, we add an onChange event handler to update its state value.

//...

<h3>Crop Video</h3>
<label className="label">Select Type</label>
<input
  type="radio"
  value="scale"
  name="crop"
  // Add the onChange attribute for the crop transformation here
  onChange={(event) => setCrop(event.target.value)}
/>
<label>Scale</label>
<input
  type="radio"
  value="crop"
  name="crop"
  // Add the onChange attribute for the crop transformation here
  onChange={(event) => setCrop(event.target.value)}
/>
<label>Crop</label>

<h3>Text</h3>
<label className="label">Add Text</label>
<input
  id="text"
  type="text"
  // Add the onChange attribute for the text overlay here
  onChange={(event) => setTextValue(event.target.value)}
/>

<h3>Blur Effect</h3>
<label className="label">Adjust Blur Effect</label>
<input
  type="number"
  // Add the onChange attribute for the blur transformation here
  onChange={(event) => setBlur(event.target.value)}
/>
</form>

// ...

Lastly, we pass the state data as props to the rendered TransformVideo with:

// ...

<section className="right-side">
  <TransformVideo
    crop={crop}
    color={color}
    text={textValue}
    blur={blur}
    //...
  />
</section>

// ...

You can see what the final pages/index.js file looks like in this GitHub Gist. It includes the audio/video upload and the video transformations.

https://gist.github.com/Chuloo/36d3ddcb643452f01530197b8a49c8d1

With this, we complete our application development, and it looks like this:

Conclusion

This article discussed how to add multiple transformations to a video file, completing an app that lets you upload a video with an audio file playing in the background while applying text, progress bar, blur, and crop transformations. You can extend the app with more video transformations and controls for each of them.


Emma Alder

Technical Writer at Hackmamba.io