Create a Video Trimming App Using ffmpeg.wasm

Ifeoma Imoh

Unwanted portions of videos are often trimmed out of social media posts to ensure that the content gets the attention it deserves. Most social media applications have built-in features that make this process a breeze.

In this post, we will build a simple app for trimming videos using ffmpeg.wasm — a WebAssembly / JavaScript port of the popular FFmpeg library that allows us to use its rich media manipulation tools within the browser.

Here is a link to the demo on CodeSandbox.

Project Setup

Create a Next.js app using the following command:

npx create-next-app ffmpeg-react

Next, we need to install our primary dependency, ffmpeg.wasm. This package has two sub-packages: @ffmpeg/core, which is the core WebAssembly port of FFmpeg, and @ffmpeg/ffmpeg, the library we will use directly in our React app to interact with the former. For now, we will only install @ffmpeg/ffmpeg; later, we will load @ffmpeg/core from a CDN link.

npm i @ffmpeg/ffmpeg

It is worth noting that ffmpeg.wasm relies on some fairly new and advanced web APIs, one of which is SharedArrayBuffer. ffmpeg.wasm uses this API to let the WebAssembly threads spawned by its @ffmpeg/core submodule read and write shared memory while performing media manipulation.

By default, most browsers disable these APIs for all web applications because malicious cross-origin assets or windows sharing the same browsing context could exploit them to perpetrate dangerous attacks. To use them, a web page must explicitly tell the browser it needs access to these special APIs, which prompts the browser to put the page in a special state known as cross-origin isolated.

This state ensures that our website and cross-origin assets no longer share the same browsing context. Instead, they are isolated separately, each maintaining its own unique browsing context. Doing this involves setting the well-known COOP (Cross-Origin-Opener-Policy) and COEP (Cross-Origin-Embedder-Policy) headers on our main document. To set these response headers, we need a server. The Next.js framework provides several ways to do server-side work within our app. Update your pages/index.js file with the following:

function App() {
  return null;
}
export default App;

export async function getServerSideProps(context) {
  // set HTTP headers
  context.res.setHeader("Cross-Origin-Opener-Policy", "same-origin");
  context.res.setHeader("Cross-Origin-Embedder-Policy", "require-corp");
  return {
    props: {},
  };
}

In the code above, we defined and exported a getServerSideProps function, which runs on the server each time this page is requested. This function receives a context object containing, among other things, the request and response objects. We call the setHeader method on the response object to include the COOP and COEP headers.
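If you'd rather apply these headers to every route instead of a single page, Next.js also supports an async headers() function in next.config.js. A minimal sketch (assuming a stock create-next-app setup) might look like this:

```javascript
// next.config.js — applies COOP/COEP to every route of the app
module.exports = {
  async headers() {
    return [
      {
        source: "/(.*)", // match all routes
        headers: [
          { key: "Cross-Origin-Opener-Policy", value: "same-origin" },
          { key: "Cross-Origin-Embedder-Policy", value: "require-corp" },
        ],
      },
    ];
  },
};
```

Either approach works; the getServerSideProps version keeps the logic local to the one page that needs cross-origin isolation.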

Breaking the App Down

Our application will consist of a handful of moving parts.

To clearly understand the components that form these building blocks, we will briefly describe each one and, where necessary, create the files that will house their specific logic, updating their contents as we go.

At the root of your project, create a folder called components. For styles relevant to our app, copy the styles from this CodeSandbox link into your styles/globals.css file.

Video FilePicker

This rather intuitive component will simply allow the user to select and display a desired video file from their computer. Create a file called VideoFilePicker.js in your components folder and add the following to it:

function VideoFilePicker({ showVideo, handleChange, children }) {
  const FileInput = () => (
    <label
      htmlFor="x"
      id={`${showVideo ? "file_picker_small" : ""}`}
      className={`file_picker`}
    >
      <span>choose file</span>
      <input onChange={handleChange} type="file" id="x" accept="video/mp4" />
    </label>
  );

  return showVideo ? (
    <>
      {children} <FileInput />
    </>
  ) : (
    <FileInput />
  );
}

export default VideoFilePicker;

The VideoFilePicker component accepts three props: showVideo, handleChange, and children. It defines another React component called FileInput, which renders a label and an input tag that let the user select a file from their computer and send it back to the parent via the handleChange prop. The label tag adds some styles based on the showVideo prop.

The VideoFilePicker component can be in one of two states based on the value of showVideo. This component either renders the FileInput component or the FileInput with the children prop. As you will see later, the children prop will represent an HTML video element.

OutputVideo

Create a file called OutputVideo.js in your components folder and add the following to it:

const OutputVideo = ({ handleDownload, videoSrc }) => {
  return videoSrc ? (
    <article className="grid_txt_2">
      <div className="bord_g_2 p_2">
        <video src={videoSrc} autoPlay controls muted width="450"></video>
      </div>
      <button onClick={handleDownload} className="btn btn_g">
        download
      </button>
    </article>
  ) : null;
};

export default OutputVideo;

The OutputVideo component renders two things: a video tag that plays the trimmed video data, and a button that triggers downloading the trimmed video file to our local machine.

The Video Clip Selector

This component will allow the user to select the portion of the video to be trimmed, provide a button to trigger the trimming process, and render some thumbnails so the user can visually get an idea of the area being trimmed. Now create a file called RangeInput.js in your components folder; we will defer updating this file for now.

Our application will also need some utility functions to help us achieve some basic tasks. Create a folder called utils at the root of the project, add a file called helpers.js within it, and add the following:

const toTimeString = (sec, showMilliSeconds = true) => {
  sec = parseFloat(sec);
  let hours = Math.floor(sec / 3600);
  let minutes = Math.floor((sec - hours * 3600) / 60);
  let seconds = sec - hours * 3600 - minutes * 60;
  // pad with a leading 0 if the value is < 10; e.g., 2 => 02
  if (hours < 10) {
    hours = "0" + hours;
  }
  if (minutes < 10) {
    minutes = "0" + minutes;
  }
  if (seconds < 10) {
    seconds = "0" + seconds;
  }
  let mantissaRegex = /\..*$/; // matches the decimal point and the digits after it, e.g., for 4.567 it matches .567
  let millisec = String(seconds).match(mantissaRegex);
  return (
    hours +
    ":" +
    minutes +
    ":" +
    String(seconds).replace(mantissaRegex, "") +
    (showMilliSeconds ? (millisec ? millisec[0] : ".000") : "")
  );
};

const readFileAsBase64 = async (file) => {
  return new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.onload = () => {
      resolve(reader.result);
    };
    reader.onerror = reject;
    reader.readAsDataURL(file);
  });
};

const download = (url) => {
  const link = document.createElement("a");
  link.href = url;
  link.setAttribute("download", "");
  link.click();
};

export { toTimeString, readFileAsBase64, download };

This file contains three helper methods. Starting from top to bottom, each function does the following:

  • toTimeString: accepts two parameters. The first, called sec, is a number in seconds, and the second, called showMilliSeconds, is a boolean that defaults to true and determines whether to include milliseconds in the output. This method takes the input in seconds and converts it to sexagesimal, a format that looks like hours:minutes:seconds.milliseconds. We will be using this function extensively to format times when we trim the video with ffmpeg.wasm.

  • readFileAsBase64: expects a file blob as input and internally uses the FileReader API to convert it to a data URI string. Since this process is asynchronous, the function returns a Promise that resolves to the data URL if successful; otherwise, it rejects with an error.

  • download: this function is responsible for downloading a file to the user's local machine. It expects a data URI as its parameter, and based on this, it programmatically does the following: first, it creates an anchor tag and adds two attributes to it. The href attribute points to the data URI of the file, while the download attribute is what makes the file downloadable. Finally, the anchor tag is programmatically clicked to trigger the download.
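To make the sexagesimal conversion concrete, here is toTimeString exactly as defined above, along with a few sample inputs and their outputs:

```javascript
// Same helper as in helpers.js, shown with sample conversions.
const toTimeString = (sec, showMilliSeconds = true) => {
  sec = parseFloat(sec);
  let hours = Math.floor(sec / 3600);
  let minutes = Math.floor((sec - hours * 3600) / 60);
  let seconds = sec - hours * 3600 - minutes * 60;
  if (hours < 10) hours = "0" + hours;
  if (minutes < 10) minutes = "0" + minutes;
  if (seconds < 10) seconds = "0" + seconds;
  const mantissaRegex = /\..*$/; // the decimal point and everything after it
  const millisec = String(seconds).match(mantissaRegex);
  return (
    hours +
    ":" +
    minutes +
    ":" +
    String(seconds).replace(mantissaRegex, "") +
    (showMilliSeconds ? (millisec ? millisec[0] : ".000") : "")
  );
};

console.log(toTimeString(3661)); // → "01:01:01.000"
console.log(toTimeString(5.25)); // → "00:00:05.25"
console.log(toTimeString(90, false)); // → "00:01:30"
```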

Basic Video Trimming App

For this section, we will be updating our index.js file incrementally and explaining each step along the way. Let's start by bringing in the necessary imports. Add the following to the top of your index.js file:

import { useState } from "react";
import { createFFmpeg, fetchFile } from "@ffmpeg/ffmpeg";
import * as helpers from "../utils/helpers";
import VideoFilePicker from "../components/VideoFilePicker";
import OutputVideo from "../components/OutputVideo";

Next, let's add the following to set up an FFmpeg instance:

const FF = createFFmpeg({
  log: true,
  corePath: "https://unpkg.com/@ffmpeg/core@0.10.0/dist/ffmpeg-core.js",
});

Using the createFFmpeg function, we set up an FFmpeg instance, passing an object with two options: log allows the instance to print logs to the console, while corePath is a URL string that lets us load the main @ffmpeg/core module responsible for all our media manipulation concerns. Finally, the FFmpeg instance is stored in a variable called FF.

Next, update the file with the following:

(async function () {
  await FF.load();
})();

We create a self-invoking asynchronous function that calls the load method on the FFmpeg instance. It loads the core script of our @ffmpeg/core module.

Next, let's update our main App component with several variables to manage the state of our entire app.

function App() {
  const [inputVideoFile, setInputVideoFile] = useState(null);
  const [trimmedVideoFile, setTrimmedVideoFile] = useState(null);
  const [trimIsProcessing, setTrimIsProcessing] = useState(false);
  const [videoMeta, setVideoMeta] = useState(null);
  const [URL, setURL] = useState(null);
  const [rStart, setRstart] = useState(0); // 0%
  const [rEnd, setRend] = useState(10); // 10%
}
export default App;

Let's go over each of them:

  • inputVideoFile: will hold the blob representing the video file selected by the user.
  • URL: will hold the data URI representation of the inputVideoFile, solely for preview purposes.
  • trimmedVideoFile: will hold the trimmed version of the inputVideoFile.
  • trimIsProcessing: this boolean manages the loading state of the trimming process.
  • videoMeta: will hold some metadata about the inputVideoFile, such as the name of the video file, its duration, its dimensions, etc. These will be necessary for us in the trimming process.
  • The last two variables determine the portion of the video that will be clipped (trimmed). They can hold any value from 0 through 100. rStart defines the start point, and rEnd defines the end point. We set their values to 0 and 10, respectively, meaning that 0 to 10% of the video will be trimmed. Later on, we will implement the video range selector tool that allows us to adjust these numbers ourselves, giving us control over the parts of the video we want to trim.

The exact trim start and end points in seconds will be computed from these percentages relative to the video's duration stored in the videoMeta variable.
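As a quick illustration (the duration value here is hypothetical), the default range maps to seconds like this:

```javascript
// Convert the percentage range into seconds, mirroring the math
// used later in handleTrim. A 60-second video is assumed.
const duration = 60; // videoMeta.duration, in seconds
const rStart = 0;    // 0%
const rEnd = 10;     // 10%

const startTime = ((rStart / 100) * duration).toFixed(2); // → "0.00"
const offset = ((rEnd / 100) * duration - startTime).toFixed(2); // → "6.00"

console.log(startTime, offset); // trim starts at 0s and lasts 6 seconds
```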

Next, let's include some functions and update our App component with the JSX to be returned:

function App() {
  const [inputVideoFile, setInputVideoFile] = useState(null);
  const [trimmedVideoFile, setTrimmedVideoFile] = useState(null);
  const [trimIsProcessing, setTrimIsProcessing] = useState(false);
  const [videoMeta, setVideoMeta] = useState(null);
  const [URL, setURL] = useState(null);
  const [rStart, setRstart] = useState(0); // 0%
  const [rEnd, setRend] = useState(10); // 10%

  const handleChange = async (e) => {
    let file = e.target.files[0];
    console.log(file);
    setInputVideoFile(file);
    setURL(await helpers.readFileAsBase64(file));
  };

  const handleLoadedData = async (e) => {
    const el = e.target;
    const meta = {
      name: inputVideoFile.name,
      duration: el.duration,
      videoWidth: el.videoWidth,
      videoHeight: el.videoHeight,
    };
    console.log({ meta });
    setVideoMeta(meta);
  };

  const handleTrim = async () => {
    setTrimIsProcessing(true);
    let startTime = ((rStart / 100) * videoMeta.duration).toFixed(2);
    let offset = ((rEnd / 100) * videoMeta.duration - startTime).toFixed(2);
    try {
      FF.FS("writeFile", inputVideoFile.name, await fetchFile(inputVideoFile));
      await FF.run(
        "-ss",
        helpers.toTimeString(startTime),
        "-i",
        inputVideoFile.name,
        "-t",
        helpers.toTimeString(offset),
        "-c:v",
        "copy",
        "ping.mp4"
      );
      const data = FF.FS("readFile", "ping.mp4");
      console.log(data);
      const dataURL = await helpers.readFileAsBase64(
        new Blob([data.buffer], { type: "video/mp4" })
      );
      setTrimmedVideoFile(dataURL);
    } catch (error) {
      console.log(error);
    } finally {
      setTrimIsProcessing(false);
    }
  };

  return (
    <main className="App">
      <div className="u-center">
        <button
          onClick={handleTrim}
          className="btn btn_b"
          disabled={trimIsProcessing}
        >
          {trimIsProcessing ? "trimming..." : "trim selected"}
        </button>
      </div>
      <section className="deck">
        <article className="grid_txt_2">
          <VideoFilePicker
            handleChange={handleChange}
            showVideo={!!inputVideoFile}
          >
            <div className="bord_g_2 p_2">
              <video
                src={inputVideoFile ? URL : null}
                autoPlay
                controls
                muted
                onLoadedMetadata={handleLoadedData}
                width="450"
              ></video>
            </div>
          </VideoFilePicker>
        </article>
        <OutputVideo
          videoSrc={trimmedVideoFile}
          handleDownload={() => helpers.download(trimmedVideoFile)}
        />
      </section>
    </main>
  );
}

We updated the App component to include three functions with very intuitive names: handleChange, handleLoadedData, and handleTrim. The first two are just auxiliaries to the third, which is the epicenter of our app.

Among the JSX returned by this component is VideoFilePicker, which receives several props. One is handleChange, which stores the selected file in state and then constructs a data URI of the input file, storing it in the URL variable. It also receives a showVideo prop, a boolean that signifies the availability of the input file; this is used internally to determine how VideoFilePicker is rendered, as described earlier.

VideoFilePicker also wraps a video tag, whose src attribute receives the URL of the input video file. It accepts the onLoadedMetadata event, which fires our handleLoadedData function when the browser has loaded the metadata for the input video file. The handleLoadedData function simply extracts the video's name, duration, and dimensions and stores them in our videoMeta variable using its setter function. The OutputVideo component is also rendered. It is passed the trimmed video file via the videoSrc prop, and a handleDownload prop that receives a function binding that uses the download helper method to trigger a download of the trimmed video to the user's computer.

Now let's talk about the final piece of the UI: a button whose click handler triggers the handleTrim function to trim the video, and which is enabled or disabled based on the status of the trimming process. The handleTrim function starts by toggling the trimming loading state, then converts our clipping start and end points to seconds.

Notice that the offset is computed as the difference between the end point and the start point. Next, within a try...catch block, we do several things. Firstly, for ffmpeg.wasm to see and use any of our files, we need to store them in its file system (an in-memory file system based on the MEMFS module that stores files as typed arrays, specifically Uint8Arrays). We store the input video file in this file system using the FS method, where we specify three arguments:

  1. The operation we want to perform (write in our case).
  2. The name we want to call the file on the filesystem.
  3. The file contents.

We use the fetchFile function we imported earlier from @ffmpeg/ffmpeg to convert our input video file blob to the expected format and store it as required.

Next, just in case this is your first time using FFmpeg, you need to know that when FFmpeg is installed on a computer, it exposes itself as a CLI tool that you interact with by writing any of its supported commands. The most common commands involve taking an input file and maybe specifying some options, manipulating it, and storing that output in a file. This is what we did next using the run method of our FFmpeg instance, where we specified the following command with the required parameters:

  • -ss: the seek parameter. It specifies where the trimming should start; here we pass the sexagesimal form of the start time via the call to helpers.toTimeString(startTime). Specifying this parameter before -i ensures that FFmpeg seeks straight to the start point and doesn't have to scan the video from the beginning to that point.
  • -i: the input file parameter. Here we specify the input file we wrote to the in-memory file system, using our inputVideoFile.name property.
  • -t: this duration parameter specifies how long the trimmed video should run from the -ss point. Since we already computed that and stored it in the offset variable, we simply format it to sexagesimal via the call to helpers.toTimeString(offset).
  • -c:v: this is given the value copy, meaning we want the video stream's codec (compressor and decompressor) data copied as-is rather than re-encoded, which makes the trimming process faster. (To copy every stream, including audio and subtitles, you would use -c copy instead.)
  • ping.mp4: an arbitrary name we chose for the output file where the trimmed video will be stored.
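Put together, the argument list handed to FF.run for a hypothetical trim starting at 10 seconds and lasting 5 seconds would look like this (input.mp4 is a placeholder file name):

```javascript
// Hypothetical argument list for trimming a 5-second clip
// starting at the 10-second mark of input.mp4.
const args = [
  "-ss", "00:00:10.00", // seek to the start point before reading the input
  "-i", "input.mp4",    // input file, as named in the MEMFS file system
  "-t", "00:00:05.00",  // duration to keep from the seek point
  "-c:v", "copy",       // copy the video stream instead of re-encoding it
  "ping.mp4",           // output file name
];
// In the app this becomes: await FF.run(...args);
console.log(args.join(" "));
```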

Next, we read the contents of the ping.mp4 file, convert it to a data URI, and store it in the trimmedVideoFile variable. If everything succeeds, we toggle the trimming state back to false, and voila! Our video is trimmed.

Save the changes and start your application on http://localhost:3000 using the following command:

npm run dev

For now, ffmpeg.wasm does not support all video codecs supported by the main FFmpeg package. See here for video codecs supported by ffmpeg.wasm.

Extending the Video Trimming Application with a Video Clip Selector

The current version of our app allows us to trim videos; however, it seems limited since the user can only trim a hardcoded piece of the video. It still lacks an important feature: the ability for the user to select and choose which parts of the video to trim.

We also need some images rendered within the trimming zone, as seen in most social media programs, to give a better user experience when deciding what part of a video to trim. Let's update our RangeInput.js file with the following:

import React from "react";
import * as helpers from "../utils/helpers";

export default function RangeInput({
  thumbNails,
  rEnd,
  rStart,
  handleUpdaterStart,
  handleUpdaterEnd,
  loading,
  control,
  videoMeta,
}) {
  let RANGE_MAX = 100;
  if (thumbNails.length === 0 && !loading) {
    return null;
  }
  if (loading) {
    return (
      <center>
        <h2> processing thumbnails.....</h2>
      </center>
    );
  }

  return (
    <>
      <div className="range_pack">
        <div className="image_box">
          {thumbNails.map((imgURL, id) => (
            <img src={imgURL} alt={`sample_video_thumbnail_${id}`} key={id} />
          ))}
          <div
            className="clip_box"
            style={{
              width: `calc(${rEnd - rStart}% )`,
              left: `${rStart}%`,
            }}
            data-start={helpers.toTimeString(
              (rStart / RANGE_MAX) * videoMeta.duration,
              false
            )}
            data-end={helpers.toTimeString(
              (rEnd / RANGE_MAX) * videoMeta.duration,
              false
            )}
          >
            <span className="clip_box_des"></span>
            <span className="clip_box_des"></span>
          </div>
          <input
            className="range"
            type="range"
            min={0}
            max={RANGE_MAX}
            onInput={handleUpdaterStart}
            value={rStart}
          />
          <input
            className="range"
            type="range"
            min={0}
            max={RANGE_MAX}
            onInput={handleUpdaterEnd}
            value={rEnd}
          />
        </div>
      </div>
      {control}
    </>
  );
}

The RangeInput component expects several props, which are as follows:

  1. thumbNails: an array of images in the form of data URIs that will be rendered within the clipping region.
  2. rStart, rEnd, handleUpdaterStart, and handleUpdaterEnd: the start and end clipping points and their respective update functions.
  3. loading: the process of generating the thumbnails is asynchronous; this boolean reports its loading state.
  4. control: a React element that triggers trimming.
  5. videoMeta: the object holding the video metadata.

This component renders different things based on the loading state and the availability of thumbnails. While the thumbnails are being generated, i.e., when the loading prop is true, it returns some text informing the user of that. Once the thumbnails are available, it renders several elements, the most notable being two input fields of type range for adjusting the start and end trim positions. We also render a box that is dynamically styled and positioned based on the trim positions.
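The math behind that dynamically styled box is simple percentage arithmetic; here is a simplified sketch with hypothetical handle positions:

```javascript
// The selection box spans from rStart% to rEnd% of the thumbnail strip.
const rStart = 20; // start handle at 20%
const rEnd = 50;   // end handle at 50%

const style = {
  width: `${rEnd - rStart}%`, // box covers 30% of the strip
  left: `${rStart}%`,         // and begins 20% from the left edge
};
console.log(style.width, style.left); // → "30%" "20%"
```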

We need to update our index.js file to use our new RangeInput component. Let's start by importing this component. Add this to the top of your index.js file:

import RangeInput from "../components/RangeInput";

Next, within the App component, let's make the following updates to include the necessary logic needed to render the RangeInput component:

function App() {
  //...
  const [thumbnails, setThumbnails] = useState([]);
  const [thumbnailIsProcessing, setThumbnailIsProcessing] = useState(false);

  const handleChange = async (e) => {
    //...
  };

  const handleUpdateRange = (func) => {
    return ({ target: { value } }) => {
      func(value);
    };
  };

  const getThumbnails = async ({ duration }) => {
    if (!FF.isLoaded()) await FF.load();
    setThumbnailIsProcessing(true);
    let MAX_NUMBER_OF_IMAGES = 15;
    let NUMBER_OF_IMAGES =
      duration < MAX_NUMBER_OF_IMAGES ? duration : MAX_NUMBER_OF_IMAGES;
    let offset = duration / NUMBER_OF_IMAGES;
    FF.FS("writeFile", inputVideoFile.name, await fetchFile(inputVideoFile));
    const arrayOfImageURIs = [];
    for (let i = 0; i < NUMBER_OF_IMAGES; i++) {
      let startTimeInSecs = Math.round(i * offset);
      if (startTimeInSecs + offset > duration && offset > 1) {
        offset = 0;
      }
      try {
        await FF.run(
          "-ss",
          helpers.toTimeString(startTimeInSecs),
          "-i",
          inputVideoFile.name,
          "-t",
          "00:00:01.000",
          "-vf",
          `scale=150:-1`,
          `img${i}.png`
        );
        const data = FF.FS("readFile", `img${i}.png`);
        let blob = new Blob([data.buffer], { type: "image/png" });
        let dataURI = await helpers.readFileAsBase64(blob);
        arrayOfImageURIs.push(dataURI);
        FF.FS("unlink", `img${i}.png`);
      } catch (error) {
        console.log({ message: error });
      }
    }
    setThumbnailIsProcessing(false);
    return arrayOfImageURIs;
  };

  const handleLoadedData = async (e) => {
    //...
    const thumbNails = await getThumbnails(meta);
    setThumbnails(thumbNails);
  };

  const handleTrim = async () => {
    //...
  };

  return (
    <main className="App">
      <>
        <RangeInput
          rEnd={rEnd}
          rStart={rStart}
          handleUpdaterStart={handleUpdateRange(setRstart)}
          handleUpdaterEnd={handleUpdateRange(setRend)}
          loading={thumbnailIsProcessing}
          videoMeta={videoMeta}
          control={
            <div className="u-center">
              <button
                onClick={handleTrim}
                className="btn btn_b"
                disabled={trimIsProcessing}
              >
                {trimIsProcessing ? "trimming..." : "trim selected"}
              </button>
            </div>
          }
          thumbNails={thumbnails}
        />
      </>
      <section className="deck">{/* ... */}</section>
    </main>
  );
}

We started by including two new state variables: thumbnails, which represents the array of data URLs for the images in the trimming area, and thumbnailIsProcessing to manage the loading state of the thumbnail generation process. Next, we defined some functions and updated an existing function. The handleUpdateRange function will update our rStart and rEnd variables.

Internally, the handleUpdateRange function returns a function binding that will receive an event object when called by the RangeInput input fields. The event object is then destructured to get the value fed to the callback passed.
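Here is that currying pattern in isolation, invoked with a mock event object (the setRstart stand-in below is hypothetical) to show how the selected value reaches the state setter:

```javascript
// handleUpdateRange returns an event handler bound to a given setter.
const handleUpdateRange = (func) => {
  return ({ target: { value } }) => {
    func(value);
  };
};

// Simulate what React does when a range input fires an input event.
let captured;
const setRstart = (v) => (captured = v); // stand-in for the useState setter
const onInput = handleUpdateRange(setRstart);
onInput({ target: { value: "35" } }); // input values arrive as strings
console.log(captured); // → "35"
```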

We also defined the getThumbnails function, which is responsible for extracting some images from the input video file, storing them in an array, and returning that array. It starts by ensuring that the FFmpeg core script is loaded, to avoid errors, then toggles the thumbnailIsProcessing loading state. The next few lines are essential for understanding the rest of the code. A video is a collection of moving images, but we are not interested in extracting all of them. First, we define a variable holding the maximum number of images we want to extract, hardcoded to 15. Getting 15 images may not always be possible, so we define a variable called NUMBER_OF_IMAGES that determines the final number of images that will be rendered, based on the video's duration in seconds.

We want these images captured at different points in the video (e.g., for a 30-second video, if we want 15 images, we would take a shot every 2 seconds), and this is what the offset variable computes. Next, we define an array called arrayOfImageURIs to hold the data URIs of the thumbnails. We then store our input file in the FFmpeg memory file system so that the module can manipulate it.
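The relationship between duration, image count, and capture offset can be sketched in isolation (planThumbnails below is a hypothetical helper, not part of the app):

```javascript
// Decide how many thumbnails to grab and how far apart to capture them.
const MAX_NUMBER_OF_IMAGES = 15;

const planThumbnails = (duration) => {
  // Short videos get one thumbnail per second; longer ones get 15 evenly spaced.
  const count = duration < MAX_NUMBER_OF_IMAGES ? duration : MAX_NUMBER_OF_IMAGES;
  const offset = duration / count; // seconds between captures
  return { count, offset };
};

console.log(planThumbnails(8));  // → { count: 8, offset: 1 }
console.log(planThumbnails(30)); // → { count: 15, offset: 2 }
```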

Within a loop, we extract each image. We start by computing the point in the video we want to skip to before taking the shot, and then we run some commands. After specifying where to skip to, since we are only interested in a single frame, we process just 1 second of the video, as indicated by the -t parameter. Also, the video's resolution may be high, and we only want to capture a small image, so we apply a video filter, indicated by the -vf parameter. FFmpeg supports many video filters, but the one we want here is a scale operation. Specifying scale=150:-1 means the output should have a width of 150 pixels and a height of -1, i.e., determined automatically from the width to maintain the video's aspect ratio.

Finally, we capture a PNG image with a dynamically created name and store it in the file system. Next, we read this file back from memory, convert it to a blob and then to a data URI, and push it to arrayOfImageURIs. To clean up, the file is then deleted from memory. If everything goes well, the array holding the data URIs of the images is returned.

You may be wondering when or where we get to call the getThumbnails function. Well, that's easy: we can do it immediately after the user selects a video file from their computer and the browser has loaded its metadata. We update the handleLoadedData function with two new lines that invoke this function and store its result by calling the setThumbnails function.

You can also see that we updated the return statement with the RangeInput component passing all the necessary props. One noteworthy prop is the control prop, where we passed the button we used earlier that triggers the handleTrim function. This ensures the button is only rendered when the thumbnails in the clipping region have been rendered by the RangeInput component.

If you save the changes and head over to your browser, you can see our video trimming application working as expected.

Find the complete project here on GitHub.

Conclusion

FFmpeg has been a huge player in projects like YouTube, Netflix, and Vimeo, handling media manipulation concerns at a large scale. With projects like ffmpeg.wasm bringing this power to the browser, the possibilities for what we can achieve with this tool are limitless.

Ifeoma Imoh

Software Developer

Ifeoma is a software developer and technical content creator in love with all things JavaScript.