Introduction
Video subtitles provide a better viewing experience and improve accessibility for people with disabilities. Manually adding subtitles to videos, however, is repetitive and labor-intensive. Luckily, there's a way we can automate this. In this tutorial, we'll take a look at how to automatically add subtitles to videos using the Google Video Intelligence API, Cloudinary, and Next.js.
Codesandbox
The final project can be viewed on Codesandbox.
Setup
Working knowledge of JavaScript is required. Familiarity with React, Node.js, and Next.js is also recommended, although not required. Ensure you have Node.js and npm installed in your development environment.
Create a new Next.js project by running the following command in your terminal.
```bash
npx create-next-app video-subtitles-with-google-video-intelligence
```
This scaffolds a minimal Next.js project. You can check out the Next.js docs for more setup options. Proceed to open your project in your favorite code editor.
Cloudinary API Keys
Cloudinary offers a suite of APIs that allow developers to upload media, apply transformations, and optimize delivery. You can get started with a free account immediately. Create a new account at Cloudinary if you do not have one, then log in and navigate to the console page. Here you'll find your Cloud name, API Key, and API Secret.
Back in your project, create a new file at the root of your project and name it `.env.local`. Paste the following inside.
```
CLOUD_NAME=YOUR_CLOUD_NAME
API_KEY=YOUR_API_KEY
API_SECRET=YOUR_API_SECRET
```
Replace `YOUR_CLOUD_NAME`, `YOUR_API_KEY`, and `YOUR_API_SECRET` with the corresponding values that we just got from the Cloudinary console page.
What we've just done here is define some environment variables. These help us to keep sensitive keys and secrets away from our codebase. Next.js has built-in support for environment variables. Read about this in the docs.
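To sketch how these variables get consumed, here's a minimal example of reading them from `process.env` in server-side code, along with a small fail-fast guard. The guard function and the simulated value below are illustrative additions, not part of the tutorial's code.

```javascript
// Next.js loads .env.local automatically, so server-side code (API routes,
// lib files) can read the variables straight off process.env.
// The value is simulated here for illustration; Next.js sets it for you.
process.env.CLOUD_NAME = "my-cloud";

// A small guard worth considering for any file that depends on secrets:
// throw early if a required variable was never configured.
const requireEnv = (name) => {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
};

console.log(requireEnv("CLOUD_NAME")); // "my-cloud"
```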
Do not check the `.env.local` file into source control.
Google Cloud Project and credentials
The Video Intelligence API is provided by Google through the Google Cloud Platform. It contains several AI-powered features that allow for things such as face detection, label detection, video transcription, and more. Today we'll be using the video transcription feature.
If you are familiar with GCP, you can follow the quickstart guide.
Create an account if you do not already have one then navigate to the project selector page.
You then need to select an existing project or create a new one. Ensure that billing is enabled for the project. Google APIs have a free tier with a monthly limit that you can get started with. Use the APIs with caution so as not to exceed your limits. Here's how you can confirm that billing is enabled.
The next step is to enable the APIs that you will be using with that project. In our case, it's just the Video Intelligence API. Here's how to enable the Video Intelligence API.
Once you've enabled the API, you need to create a new service account. Service accounts allow our application to authenticate with Google and communicate with the GCP APIs. Go to the create a new service account page and select the project you created earlier. You will need to input an appropriate name for the service account. You can use the same name we used to create our Next.js project, `video-subtitles-with-google-video-intelligence`.
Go ahead and finish creating the account. You can leave the other options as they are. Go back to the service accounts dashboard and you'll now see your recently created service account. Under the more actions button, click on Manage keys.
Click on Add key and then on Create new key. In the pop-up dialog, make sure to choose the JSON option.
Once you're done, a `.json` file will be downloaded to your computer. Add the following to the `.env.local` file that we created earlier.
```
GCP_PROJECT_ID=YOUR_GCP_PROJECT_ID
GCP_PRIVATE_KEY=YOUR_GCP_PRIVATE_KEY
GCP_CLIENT_EMAIL=YOUR_GCP_CLIENT_EMAIL
```
Replace `YOUR_GCP_PROJECT_ID`, `YOUR_GCP_PRIVATE_KEY`, and `YOUR_GCP_CLIENT_EMAIL` with the values of `project_id`, `private_key`, and `client_email` respectively from the `.json` file that was downloaded above.
Dependencies
The final step in the setup is to install the required dependencies. We need the Google Cloud Video Intelligence SDK, cloudinary, formidable, and date-fns. We'll use formidable to parse incoming form data, which will allow us to upload videos from the frontend. date-fns is a library of date and time utilities.
Run the following command in your terminal
```bash
npm install cloudinary formidable date-fns @google-cloud/video-intelligence
```
Getting started
Create a new folder at the root of your project and call it `lib`. This folder will contain all our shared code. Create a file named `parse-form.js` under the `lib` folder and paste the following inside.
```js
// lib/parse-form.js

import { IncomingForm } from "formidable";

/**
 * Parses the incoming form data.
 *
 * @param {NextApiRequest} req The incoming request object
 */
export const parseForm = (req) => {
  return new Promise((resolve, reject) => {
    const form = new IncomingForm({ keepExtensions: true, multiples: true });

    form.parse(req, (error, fields, files) => {
      if (error) {
        return reject(error);
      }

      return resolve({ fields, files });
    });
  });
};
```
This file just sets up formidable so that we can parse incoming form data. Read more in the formidable docs.
Create another file under `lib` and name it `cloudinary.js`. Paste the following code inside `lib/cloudinary.js`.
```js
// lib/cloudinary.js

// Import the v2 API and rename it to cloudinary
import { v2 as cloudinary } from "cloudinary";

// Initialize the SDK with cloud_name, api_key, and api_secret
cloudinary.config({
  cloud_name: process.env.CLOUD_NAME,
  api_key: process.env.API_KEY,
  api_secret: process.env.API_SECRET,
});

const CLOUDINARY_FOLDER_NAME = "automatic-subtitles/";

/**
 * Get a single cloudinary upload
 *
 * @param {string} id
 * @returns {Promise}
 */
export const handleGetCloudinaryUpload = (id) => {
  return cloudinary.api.resource(id, {
    type: "upload",
    prefix: CLOUDINARY_FOLDER_NAME,
    resource_type: "video",
  });
};

/**
 * Get cloudinary uploads
 * @returns {Promise}
 */
export const handleGetCloudinaryUploads = () => {
  return cloudinary.api.resources({
    type: "upload",
    prefix: CLOUDINARY_FOLDER_NAME,
    resource_type: "video",
  });
};

/**
 * Uploads a video to cloudinary and returns the upload result
 *
 * @param {{path: string; transformation?: TransformationOptions; publicId?: string; folder?: boolean; }} resource
 */
export const handleCloudinaryUpload = (resource) => {
  return cloudinary.uploader.upload(resource.path, {
    // Folder to store the video in
    folder: resource.folder ? CLOUDINARY_FOLDER_NAME : null,
    // Public id of the video
    public_id: resource.publicId,
    // Type of resource
    resource_type: "auto",
    // Transformation to apply to the video
    transformation: resource.transformation,
  });
};

/**
 * Deletes resources from cloudinary. Takes in an array of public ids
 * @param {string[]} ids
 */
export const handleCloudinaryDelete = (ids) => {
  return cloudinary.api.delete_resources(ids, {
    resource_type: "video",
  });
};
```
This file contains all the functions we need to communicate with Cloudinary. We first import the v2 API from the cloudinary SDK and rename it to cloudinary. We then initialize it by calling the `config` method and passing the cloud name, API key, and API secret.
`CLOUDINARY_FOLDER_NAME` is the folder where we'll store all our videos. This will make it easier for us to get all the uploads later.

`handleGetCloudinaryUpload` takes in a public id and gets a single resource from Cloudinary by calling the `api.resource` method on the Cloudinary SDK. Read more about this method in the official docs.

`handleGetCloudinaryUploads` calls the `api.resources` method to get all resources uploaded to the folder that we defined in the `CLOUDINARY_FOLDER_NAME` variable. Read about this method in the docs.

`handleCloudinaryUpload` takes in an object containing the path to the file we want to upload and any transformations that we want to apply to it. It calls the `uploader.upload` method on the SDK. Read about this method here.

`handleCloudinaryDelete` takes in an array of public IDs and passes them to the `api.delete_resources` method for deletion. Read more about this here.
Create a new file under the `lib` folder and name it `google.js`. Paste the following inside `lib/google.js`.
```js
// lib/google.js

import {
  VideoIntelligenceServiceClient,
  protos,
} from "@google-cloud/video-intelligence";

const client = new VideoIntelligenceServiceClient({
  // Google cloud platform project id
  projectId: process.env.GCP_PROJECT_ID,
  credentials: {
    client_email: process.env.GCP_CLIENT_EMAIL,
    private_key: process.env.GCP_PRIVATE_KEY.replace(/\\n/gm, "\n"),
  },
});

/**
 *
 * @param {string | Uint8Array} inputContent
 * @returns {Promise<protos.google.cloud.videointelligence.v1.VideoAnnotationResults>}
 */
export const analyzeVideoTranscript = async (inputContent) => {
  // Grab the operation using array destructuring. The operation is the first object in the array.
  const [operation] = await client.annotateVideo({
    // Input content
    inputContent: inputContent,
    // Video Intelligence features
    features: ["SPEECH_TRANSCRIPTION"],
    // Video context settings
    videoContext: {
      speechTranscriptionConfig: {
        languageCode: "en-US",
        enableAutomaticPunctuation: true,
      },
    },
  });

  const [operationResult] = await operation.promise();

  // Gets annotations for the video
  const [annotations] = operationResult.annotationResults;

  return annotations;
};
```
We create a new client and pass it the project id and a credentials object. Here are the different ways you can authenticate the client. The `analyzeVideoTranscript` function takes in a string or a `Uint8Array` and calls the client's `annotateVideo` method with a few options. Read more about these options in the docs. Take note of the `features` option. We need to tell Google what operation to run; in this case, we only pass `SPEECH_TRANSCRIPTION`. Read more about this here.
We call promise()
on the operation and await for the promise to be complete. We then get the operation result using Javascript's destructuring. To understand the structure of the resulting data, take a look at the official documentation. We then proceed to get the first item in the annotation results and return that.
Create a new folder called `videos` under `pages/api`. Create two files inside `pages/api/videos`: one called `index.js` and the other `[...id].js`. If you're not familiar with API routes in Next.js, have a look at this documentation. `[...id].js` is an example of dynamic routing in Next.js; this particular syntax is designed to catch all routes. Read about this here.
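To illustrate what a catch-all route captures: for a request to `/api/videos/automatic-subtitles/my-video`, Next.js sets `req.query.id` to the array `["automatic-subtitles", "my-video"]`. Rejoining the segments with `/` recovers the full Cloudinary public ID, which is exactly what our `[...id].js` handler will do. A standalone sketch of that normalization (the helper name is ours, for illustration):

```javascript
// Normalize the `id` query parameter of a catch-all route: an array of
// path segments becomes a slash-joined public ID, while a plain string
// (single-segment route) passes through unchanged.
const normalizeId = (queryId) =>
  Array.isArray(queryId) ? queryId.join("/") : queryId;

console.log(normalizeId(["automatic-subtitles", "my-video"])); // "automatic-subtitles/my-video"
console.log(normalizeId("my-video")); // "my-video"
```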
Paste the following code inside `pages/api/videos/index.js`:
```js
// pages/api/videos/index.js

import {
  handleCloudinaryUpload,
  handleGetCloudinaryUploads,
} from "../../../lib/cloudinary";
import { parseForm } from "../../../lib/parse-form";
import { promises as fs } from "fs";
import { analyzeVideoTranscript } from "../../../lib/google";
import { intervalToDuration } from "date-fns";

// Custom config for our API route
export const config = {
  api: {
    bodyParser: false,
  },
};

/**
 *
 * @param {NextApiRequest} req
 * @param {NextApiResponse} res
 */
export default async function handler(req, res) {
  switch (req.method) {
    case "GET": {
      try {
        const result = await handleGetRequest();

        return res.status(200).json({ message: "Success", result });
      } catch (error) {
        console.error(error);
        return res.status(400).json({ message: "Error", error });
      }
    }

    case "POST": {
      try {
        const result = await handlePostRequest(req);

        return res.status(201).json({ message: "Success", result });
      } catch (error) {
        console.error(error);
        return res.status(400).json({ message: "Error", error });
      }
    }

    default: {
      return res.status(405).json({ message: "Method Not Allowed" });
    }
  }
}

const handleGetRequest = async () => {
  const uploads = await handleGetCloudinaryUploads();

  return uploads;
};

/**
 * Handles the POST request to the API route.
 *
 * @param {NextApiRequest} req The incoming request object
 */
const handlePostRequest = async (req) => {
  // Get the form data using the parseForm function
  const data = await parseForm(req);

  // Get the video file from the form data
  const { video } = data.files;

  // Read the contents of the video file
  const videoFile = await fs.readFile(video.filepath);

  // Get the base64 encoded video file
  const base64Video = videoFile.toString("base64");

  // Analyze the video transcript using Google's video intelligence API
  const annotations = await analyzeVideoTranscript(base64Video);

  // Map through the speech transcriptions gotten from the annotations
  const allSentences = annotations.speechTranscriptions
    .map((speechTranscription) => {
      // Map through the speech transcription's alternatives. For our case, it's just one
      return speechTranscription.alternatives
        .map((alternative) => {
          // Get the word segments from the speech transcription
          const words = alternative.words ?? [];

          // Place the word segments into groups of ten
          const groupsOfTen = words.reduce((group, word, index) => {
            return (
              (index % 10
                ? group[group.length - 1].push(word)
                : group.push([word])) && group
            );
          }, []);

          // Map through the word groups and build a sentence with the start time and end time
          return groupsOfTen.map((group) => {
            // Start offset time in seconds
            const startOffset =
              parseInt(group[0].startTime.seconds ?? 0) +
              (group[0].startTime.nanos ?? 0) / 1000000000;

            // End offset time in seconds
            const endOffset =
              parseInt(group[group.length - 1].endTime.seconds ?? 0) +
              (group[group.length - 1].endTime.nanos ?? 0) / 1000000000;

            return {
              startTime: startOffset,
              endTime: endOffset,
              sentence: group.map((word) => word.word).join(" "),
            };
          });
        })
        .flat();
    })
    .flat();

  // Build the subtitle file content
  const subtitleContent = allSentences
    .map((sentence, index) => {
      // Format the start time
      const startTime = intervalToDuration({
        start: 0,
        end: sentence.startTime * 1000,
      });

      // Format the end time
      const endTime = intervalToDuration({
        start: 0,
        end: sentence.endTime * 1000,
      });

      // Zero-pad each unit to match the HH:MM:SS,mmm format that SRT expects
      const pad = (n) => String(n ?? 0).padStart(2, "0");

      return `${index + 1}\n${pad(startTime.hours)}:${pad(
        startTime.minutes
      )}:${pad(startTime.seconds)},000 --> ${pad(endTime.hours)}:${pad(
        endTime.minutes
      )}:${pad(endTime.seconds)},000\n${sentence.sentence}`;
    })
    .join("\n\n");

  const subtitlePath = `public/subtitles/subtitle.srt`;

  // Write the subtitle file to the filesystem
  await fs.writeFile(subtitlePath, subtitleContent);

  // Upload the subtitle file to Cloudinary
  const subtitleUploadResult = await handleCloudinaryUpload({
    path: subtitlePath,
    folder: false,
  });

  // Delete the subtitle file from the filesystem
  await fs.unlink(subtitlePath);

  // Upload the video file to Cloudinary and apply the subtitle file as an overlay/layer
  const videoUploadResult = await handleCloudinaryUpload({
    path: video.filepath,
    folder: true,
    transformation: [
      {
        background: "black",
        color: "yellow",
        overlay: {
          font_family: "Arial",
          font_size: "32",
          font_weight: "bold",
          resource_type: "subtitles",
          public_id: subtitleUploadResult.public_id,
        },
      },
      { flags: "layer_apply" },
    ],
  });

  return videoUploadResult;
};
```
This is where the magic happens. At the top, we export a custom config object. This config object tells Next.js not to use the default body parser since we'll be parsing the form data on our own. Read about custom config in API routes here. The default exported function named `handler` is standard for Next.js API routes. We use a switch statement to only handle GET and POST requests.
`handleGetRequest` gets all the uploaded resources by calling the `handleGetCloudinaryUploads` function that we created earlier.
`handlePostRequest` takes in the incoming request object. We use the `parseForm` function that we created in the `parse-form.js` file to get the form data. We then get the video file, read it into a base64 string, and pass it to `analyzeVideoTranscript`, which transcribes the video using Google Video Intelligence. The following is the structure of the data that we get back:
```ts
{
  segment: {
    startTimeOffset: {
      seconds: string;
      nanos: number;
    };
    endTimeOffset: {
      seconds: string;
      nanos: number;
    };
  };
  speechTranscriptions: [
    {
      alternatives: [
        {
          transcript: string;
          confidence: number;
          words: [
            {
              startTime: {
                seconds: string;
                nanos: number;
              };
              endTime: {
                seconds: string;
                nanos: number;
              };
              word: string;
            }
          ];
        }
      ];
      languageCode: string;
    }
  ];
}
```
You can also check out some sample data here.

We need to convert that to the following structure:

```ts
[
  {
    startTime: number;
    endTime: number;
    sentence: string;
  }
]
```
To achieve this, we map through `annotations.speechTranscriptions`, then through the `alternatives` for each speech transcription. Google returns each word individually, along with its start and end time. We put those words into groups of ten so that we can form sentences of at most ten words each; we don't want our sentences to be too long. We then join each group of words into a sentence and flatten everything.
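The grouping step can be hard to follow inside the larger handler, so here it is in isolation as a runnable sketch. The sample word objects below are made up for illustration; the chunking `reduce` is the same one used in the API route.

```javascript
// Fabricated sample: 23 timed words, shaped like the Video Intelligence
// word segments (word, startTime, endTime with protobuf-style timestamps).
const words = Array.from({ length: 23 }, (_, i) => ({
  word: `word${i}`,
  startTime: { seconds: String(i), nanos: 0 },
  endTime: { seconds: String(i + 1), nanos: 0 },
}));

// Chunk the flat word list into groups of ten. Array#push returns the new
// length (always truthy), so the expression evaluates to the accumulator.
const groupsOfTen = words.reduce((groups, word, index) => {
  return (
    (index % 10
      ? groups[groups.length - 1].push(word)
      : groups.push([word])) && groups
  );
}, []);

// Join each group into a sentence spanning from the first word's start
// time to the last word's end time.
const sentences = groupsOfTen.map((group) => ({
  startTime: Number(group[0].startTime.seconds),
  endTime: Number(group[group.length - 1].endTime.seconds),
  sentence: group.map((w) => w.word).join(" "),
}));

console.log(groupsOfTen.length); // 3 (groups of 10, 10, and 3 words)
console.log(sentences[0].sentence.split(" ").length); // 10
```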
Next, we need to create a subtitle file. Let's have a look at the structure of a subtitles (.srt) file.

```
number
hour:minute:second,millisecond --> hour:minute:second,millisecond
sentence

number
hour:minute:second,millisecond --> hour:minute:second,millisecond
sentence
```

For example:

```
1
00:01:20,000 --> 00:01:30,000
This is the first frame

2
00:01:31,000 --> 00:01:40,000
This is the second frame
```
We model our data into this format in the following piece of code:

```js
// Build the subtitle file content
const subtitleContent = allSentences
  .map((sentence, index) => {
    // Format the start time
    const startTime = intervalToDuration({
      start: 0,
      end: sentence.startTime * 1000,
    });

    // Format the end time
    const endTime = intervalToDuration({
      start: 0,
      end: sentence.endTime * 1000,
    });

    // Zero-pad each unit to match the HH:MM:SS,mmm format that SRT expects
    const pad = (n) => String(n ?? 0).padStart(2, "0");

    return `${index + 1}\n${pad(startTime.hours)}:${pad(
      startTime.minutes
    )}:${pad(startTime.seconds)},000 --> ${pad(endTime.hours)}:${pad(
      endTime.minutes
    )}:${pad(endTime.seconds)},000\n${sentence.sentence}`;
  })
  .join("\n\n");
```

Note that we zero-pad the hours, minutes, and seconds so that each timestamp matches the `00:01:20,000` form shown above; many players reject unpadded timestamps.
We then use `writeFile` to create a new subtitle file and upload it to Cloudinary. After this is done, we upload our video to Cloudinary and apply the subtitle file as a layer. Read about how this works in the Cloudinary docs.
Moving on to the `[...id].js` file. Paste the following inside `pages/api/videos/[...id].js`:
```js
// pages/api/videos/[...id].js

import { NextApiRequest, NextApiResponse } from "next";
import {
  handleCloudinaryDelete,
  handleGetCloudinaryUpload,
} from "../../../lib/cloudinary";

/**
 *
 * @param {NextApiRequest} req
 * @param {NextApiResponse} res
 */
export default async function handler(req, res) {
  const id = Array.isArray(req.query.id)
    ? req.query.id.join("/")
    : req.query.id;

  switch (req.method) {
    case "GET": {
      try {
        const result = await handleGetRequest(id);

        return res.status(200).json({ message: "Success", result });
      } catch (error) {
        console.error(error);
        return res.status(400).json({ message: "Error", error });
      }
    }

    case "DELETE": {
      try {
        const result = await handleDeleteRequest(id);

        return res.status(200).json({ message: "Success", result });
      } catch (error) {
        console.error(error);
        return res.status(400).json({ message: "Error", error });
      }
    }

    default: {
      return res.status(405).json({ message: "Method Not Allowed" });
    }
  }
}

/**
 * Gets a single resource from Cloudinary.
 *
 * @param {string} id Public ID of the video to get
 */
const handleGetRequest = async (id) => {
  const upload = await handleGetCloudinaryUpload(id);

  return upload;
};

/**
 * Handles the DELETE request to the API route.
 *
 * @param {string} id Public ID of the video to delete
 */
const handleDeleteRequest = (id) => {
  // Delete the uploaded video from Cloudinary
  return handleCloudinaryDelete([id]);
};
```
`handleGetRequest` calls `handleGetCloudinaryUpload` with the public id of the video and returns the uploaded video. `handleDeleteRequest` just deletes the resource with the given public id.
Let's move on to the frontend. Add the following code to `styles/globals.css`:
```css
:root {
  --color-primary: #ff0000;
  --color-primary-light: #ff4444;
}

.btn {
  display: inline-block;
  padding: 0.5rem 1rem;
  background: var(--color-primary);
  color: #ffffff;
  border: none;
  border-radius: 0.25rem;
  cursor: pointer;
  font-size: 1rem;
  font-weight: bold;
  text-transform: uppercase;
  text-decoration: none;
  text-align: center;
  transition: all 0.2s ease-in-out;
}

.btn:hover {
  background: var(--color-primary-light);
  box-shadow: 0 0 0.25rem 0 rgba(0, 0, 0, 0.25);
}
```
These are just a few styles to help us with the UI.
Create a new folder at the root of your project and name it `components`. This folder will hold our shared components. Create a new file under `components` called `Layout.js` and paste the following code inside.
```jsx
// components/Layout.js

import Head from "next/head";
import Link from "next/link";

export default function Layout({ children }) {
  return (
    <div>
      <Head>
        <title>
          Add subtitles to videos using google video intelligence and cloudinary
        </title>
        <meta
          name="description"
          content="Add subtitles to videos using google video intelligence and cloudinary"
        />
        <link rel="icon" href="/favicon.ico" />
      </Head>
      <nav>
        <ul>
          <li>
            <Link href="/">
              <a className="btn">Home</a>
            </Link>
          </li>
          <li>
            <Link href="/videos">
              <a className="btn">Videos</a>
            </Link>
          </li>
        </ul>
      </nav>
      <main>{children}</main>
      <style jsx>{`
        nav {
          background-color: #f0f0f0;
          min-height: 100px;
          display: flex;
          align-items: center;
        }

        nav ul {
          list-style: none;
          padding: 0 32px;
          flex: 1;
          display: flex;
          flex-flow: row nowrap;
          justify-content: center;
          gap: 8px;
        }
      `}</style>
    </div>
  );
}
```
We'll be wrapping our pages in this component so that we have a consistent layout. Paste the following code inside `pages/index.js`:
```jsx
// pages/index.js

import { useRouter } from "next/router";
import { useState } from "react";
import Layout from "../components/Layout";

export default function Home() {
  const router = useRouter();
  const [isLoading, setIsLoading] = useState(false);

  const handleFormSubmit = async (event) => {
    event.preventDefault();

    try {
      setIsLoading(true);

      const formData = new FormData(event.target);

      const response = await fetch("/api/videos", {
        method: "POST",
        body: formData,
      });

      const data = await response.json();

      if (!response.ok) {
        throw data;
      }

      router.push("/videos");
    } catch (error) {
      console.error(error);
    } finally {
      setIsLoading(false);
    }
  };

  return (
    <Layout>
      <div className="wrapper">
        <form onSubmit={handleFormSubmit}>
          <h2>Upload video file</h2>
          <div className="input-group">
            <label htmlFor="video">Video File</label>
            <input
              type="file"
              name="video"
              id="video"
              accept=".mp4,.mov,.mpeg4,.avi"
              multiple={false}
              required
              disabled={isLoading}
            />
          </div>
          <button className="btn" type="submit" disabled={isLoading}>
            Upload
          </button>
          <button className="btn" type="reset" disabled={isLoading}>
            Cancel
          </button>
        </form>
      </div>
      <style jsx>{`
        div.wrapper > form {
          margin: 64px auto;
          background-color: #fdd8d8;
          padding: 40px 20px;
          width: 60%;
          display: flex;
          flex-flow: column;
          gap: 8px;
          border-radius: 0.25rem;
        }

        div.wrapper > form > div.input-group {
          display: flex;
          flex-flow: column;
          gap: 8px;
        }

        div.wrapper > form > div.input-group > label {
          font-weight: bold;
        }

        div.wrapper > form > div.input-group > input {
          background-color: #f5f5f5;
        }

        div.wrapper > form > button {
          height: 50px;
        }
      `}</style>
    </Layout>
  );
}
```
This is a simple page with a form for uploading the video that we want to add subtitles to. `handleFormSubmit` makes a POST request to `/api/videos` with the form data and then navigates to the `/videos` page upon success.
Create a new folder under the `pages` folder and call it `videos`. Create two files under `pages/videos`: `index.js` and `[...id].js`. Please note that this is not the same as the `pages/api/videos` folder. Paste the following code inside `pages/videos/index.js`:
```jsx
// pages/videos/index.js

import Link from "next/link";
import Image from "next/image";
import { useCallback, useEffect, useState } from "react";
import Layout from "../../components/Layout";

export default function VideosPage() {
  const [isLoading, setIsLoading] = useState(false);
  const [videos, setVideos] = useState([]);

  const getVideos = useCallback(async () => {
    try {
      setIsLoading(true);

      const response = await fetch("/api/videos", {
        method: "GET",
      });

      const data = await response.json();

      if (!response.ok) {
        throw data;
      }

      setVideos(data.result.resources);
      console.log(data);
    } catch (error) {
      // TODO: Show error message to the user
      console.error(error);
    } finally {
      setIsLoading(false);
    }
  }, []);

  useEffect(() => {
    getVideos();
  }, [getVideos]);

  return (
    <Layout>
      <div className="wrapper">
        <div className="videos-wrapper">
          {videos.map((video, index) => {
            const splitVideoUrl = video.secure_url.split(".");

            splitVideoUrl[splitVideoUrl.length - 1] = "jpg";

            const thumbnail = splitVideoUrl.join(".");

            return (
              <div className="video-wrapper" key={`video-${index}`}>
                <div className="thumbnail">
                  <Image
                    src={thumbnail}
                    alt={video.secure_url}
                    layout="fill"
                  ></Image>
                </div>
                <div className="actions">
                  <Link
                    href="/videos/[...id]"
                    as={`/videos/${video.public_id}`}
                  >
                    <a>Open Video</a>
                  </Link>
                </div>
              </div>
            );
          })}
        </div>
      </div>

      {!isLoading && videos.length === 0 ? (
        <div className="no-videos">
          <b>No videos yet</b>
          <Link href="/" passHref>
            <button className="btn">Upload Video</button>
          </Link>
        </div>
      ) : null}

      {isLoading ? (
        <div className="loading">
          <b>Loading...</b>
        </div>
      ) : null}

      <style jsx>{`
        div.wrapper {
          min-height: 100vh;
        }

        div.wrapper h1 {
          text-align: center;
        }

        div.wrapper div.videos-wrapper {
          padding: 20px;
          display: flex;
          flex-flow: row wrap;
          gap: 20px;
        }

        div.wrapper div.videos-wrapper div.video-wrapper {
          flex: 0 0 400px;
          height: 400px;
        }

        div.wrapper div.videos-wrapper div.video-wrapper div.thumbnail {
          position: relative;
          width: 100%;
          height: 80%;
        }

        div.loading,
        div.no-videos {
          height: 100vh;
          display: flex;
          flex-flow: column;
          justify-content: center;
          align-items: center;
          gap: 8px;
        }
      `}</style>
    </Layout>
  );
}
```
This page calls `getVideos` when it renders. `getVideos` makes a GET request to `/api/videos` to get all the uploaded videos. You can read about the `useCallback` and `useEffect` React hooks in the React docs. We then show a thumbnail for each video. See here for how to generate a thumbnail of a Cloudinary video.
And now for the final page. Paste the following inside `pages/videos/[...id].js`:
```jsx
// pages/videos/[...id].js

import { useRouter } from "next/router";
import { useCallback, useEffect, useState } from "react";
import Layout from "../../components/Layout";

export default function VideoPage() {
  const router = useRouter();

  const id = Array.isArray(router.query.id)
    ? router.query.id.join("/")
    : router.query.id;

  const [isLoading, setIsLoading] = useState(false);
  const [video, setVideo] = useState(null);

  const getVideo = useCallback(async () => {
    try {
      setIsLoading(true);
      const response = await fetch(`/api/videos/${id}`, {
        method: "GET",
      });

      const data = await response.json();

      if (!response.ok) {
        throw data;
      }

      setVideo(data.result);
      console.log(data);
    } catch (error) {
      // TODO: Show error message to the user
      console.error(error);
    } finally {
      setIsLoading(false);
    }
  }, [id]);

  useEffect(() => {
    getVideo();
  }, [getVideo]);

  const handleDownload = async () => {
    try {
      setIsLoading(true);

      const response = await fetch(video.secure_url, {});

      if (response.ok) {
        const blob = await response.blob();

        const fileUrl = URL.createObjectURL(blob);

        const a = document.createElement("a");
        a.href = fileUrl;
        a.download = `${video.public_id.replace("/", "-")}.${video.format}`;
        document.body.appendChild(a);
        a.click();
        a.remove();
        return;
      }

      throw await response.json();
    } catch (error) {
      // TODO: Show error message to the user
      console.error(error);
    } finally {
      setIsLoading(false);
    }
  };

  const handleDelete = async () => {
    try {
      setIsLoading(true);

      const response = await fetch(`/api/videos/${id}`, {
        method: "DELETE",
      });

      const data = await response.json();

      if (!response.ok) {
        throw data;
      }

      router.replace("/videos");
    } catch (error) {
      console.error(error);
    } finally {
      setIsLoading(false);
    }
  };

  return (
    <Layout>
      {video && !isLoading ? (
        <div className="wrapper">
          <div className="video-wrapper">
            <video src={video.secure_url} controls></video>
            <div className="actions">
              <button
                className="btn"
                onClick={handleDownload}
                disabled={isLoading}
              >
                Download
              </button>
              <button
                className="btn"
                onClick={handleDelete}
                disabled={isLoading}
              >
                Delete
              </button>
            </div>
          </div>
        </div>
      ) : null}

      {isLoading ? (
        <div className="loading">
          <b>Loading...</b>
        </div>
      ) : null}

      <style jsx>{`
        div.wrapper > div.video-wrapper {
          width: 80%;
          margin: 20px auto;
          display: flex;
          flex-flow: column;
          gap: 8px;
        }

        div.wrapper > div.video-wrapper > video {
          width: 100%;
        }

        div.wrapper > div.video-wrapper > div.actions {
          display: flex;
          flex-flow: row;
          gap: 8px;
        }

        div.loading {
          height: 100vh;
          display: flex;
          justify-content: center;
          align-items: center;
        }
      `}</style>
    </Layout>
  );
}
```
`getVideo` makes a GET request to `/api/videos/:id` to get the video with the given id. `handleDownload` downloads the video file by fetching it as a blob and triggering a browser download. `handleDelete` makes a DELETE request to `/api/videos/:id` to delete the video with the given id.
For the final piece of the puzzle, add the following to `next.config.js`.
```js
module.exports = {
  // ... other options
  images: {
    domains: ["res.cloudinary.com"],
  },
};
```
This is because we're using the `Image` component from Next.js. We need to add the Cloudinary domain so that images from that domain can be optimized. Read more about this here.
You can now run your application with the following command:

```bash
npm run dev
```
And that's a wrap for this tutorial. You can find the full code on my GitHub. Please note that this is just a simple demonstration; there are many ways you could optimize your application. Have a look at Google's long-running operations and Cloudinary's notifications.