Introduction
The modern internet is awash with explicit videos, and most children of our era have unlimited access to content online. One way to protect them is to automatically detect explicit content in videos and blur it. We can achieve this using Google's Video Intelligence API and Cloudinary. In this tutorial, we'll take a look at how to implement this using Next.js.
Codesandbox
The final project can be viewed on Codesandbox.
You can find the full source code on my Github repository.
Getting started
First things first, you need to have Node.js and NPM installed. Working knowledge of JavaScript, Node.js, and React/Next.js is also a plus.
Cloudinary Credentials
We're going to be using Cloudinary for media upload and storage. It's really easy to get started with, and it's free as well. Sign up for a free account at Cloudinary and then navigate to the Console page. Take note of your Cloud name, API Key, and API Secret. We'll come back to them later.
Google Cloud Project and credentials
The Video Intelligence API is an amazing API provided by Google via the Google Cloud Platform. I'm going to walk you through how to create a new project and obtain the credentials. If you're already familiar with GCP, you can follow the quickstart guide instead. Create an account if you do not already have one, then navigate to the project selector page. Once there, you will need to select an existing project or create a new one. Make sure that billing is enabled for the project you create/select. Google APIs have a free tier with a monthly limit that you can get started with; use the APIs with caution so as not to exceed your limits. Here's how you can confirm that billing is enabled. The next thing we need to do is enable the Video Intelligence API so that we can use it. Then navigate to the Create a new service account page and select the project you created earlier. Input an appropriate name for the service account, such as blur-explicit-content-with-cloudinary.
You can leave other options as they are and create the service account. Navigate back to the service accounts dashboard and you'll notice your newly created service account. Under the more actions button, click on Manage keys.
Click on Add key and then on Create new key.
In the pop-up dialog, make sure to choose the JSON option.
Once you're done, a .json file will be downloaded to your computer. Take note of this file's location, as we will be using it later.
We're now ready to get coding.
Implementation
Before anything else, we need to create a new Next.js project. Fire up your terminal/command line and run the following command.
```bash
npx create-next-app blur-explicit-content-with-cloudinary
```
This will scaffold a basic project called blur-explicit-content-with-cloudinary. You can look at the official documentation for more advanced options such as TypeScript. Change directory into the new project and open it in your favorite code editor.
```bash
cd blur-explicit-content-with-cloudinary
```
Cloudinary upload
Let's start by creating a few functions that will handle uploading media to Cloudinary and deleting it.
We need to install the Cloudinary SDK first:
```bash
npm install --save cloudinary
```
Next, create a folder called lib/ at the root of your project. Inside it, create a new file called cloudinary.js and paste the following code inside.
```javascript
// lib/cloudinary.js

// Import the v2 api and rename it to cloudinary
import { v2 as cloudinary } from 'cloudinary';

// Initialize the SDK with cloud_name, api_key, and api_secret
cloudinary.config({
  cloud_name: process.env.CLOUD_NAME,
  api_key: process.env.API_KEY,
  api_secret: process.env.API_SECRET,
});

const FOLDER_NAME = 'explicit-videos/';

export const handleCloudinaryUpload = (path, transformation = []) => {
  // Create and return a new Promise
  return new Promise((resolve, reject) => {
    // Use the sdk to upload media
    cloudinary.uploader.upload(
      path,
      {
        // Folder to store video in
        folder: FOLDER_NAME,
        // Type of resource
        resource_type: 'video',
        transformation,
      },
      (error, result) => {
        if (error) {
          // Reject the promise with an error if any
          return reject(error);
        }

        // Resolve the promise with a successful result
        return resolve(result);
      }
    );
  });
};

export const handleCloudinaryDelete = (ids) => {
  return new Promise((resolve, reject) => {
    cloudinary.api.delete_resources(
      ids,
      {
        resource_type: 'video',
      },
      (error, result) => {
        if (error) {
          return reject(error);
        }

        return resolve(result);
      }
    );
  });
};
```
We first import the cloudinary v2 SDK and rename it to cloudinary; this is purely for readability. Next, we initialize the SDK by calling its config method, passing the cloud_name, api_key, and api_secret. We're using environment variables here, which we'll define in a moment. We also define a folder name where we're going to store all of our videos. The handleCloudinaryUpload function takes in a path to the file that we want to upload and an optional array of transformations to run on the video. Inside this function, we call the uploader.upload method on the cloudinary SDK to upload the file. Read more about the upload media API and the options you can pass in the official documentation. handleCloudinaryDelete takes in an array of public IDs belonging to the resources we want to delete, then calls the api.delete_resources method on the SDK. Read more about this here. Let's define those environment variables. Luckily, Next.js has built-in support for environment variables; the topic is covered in-depth in their docs. Create a file called .env.local at the root of your project and paste the following inside.
```
CLOUD_NAME=YOUR_CLOUD_NAME
API_KEY=YOUR_API_KEY
API_SECRET=YOUR_API_SECRET
```
Make sure to replace YOUR_CLOUD_NAME, YOUR_API_KEY, and YOUR_API_SECRET with the appropriate values that we got from the Cloudinary Credentials section.
And that's it for this file.
Google Video Intelligence
Let's now create the functions that will allow us to communicate with the Video Intelligence API.
The first thing to do is install the dependency:
```bash
npm install @google-cloud/video-intelligence
```
Create a new file under lib/ called google.js and paste the following code inside.
```javascript
// lib/google.js

import {
  VideoIntelligenceServiceClient,
  protos,
} from '@google-cloud/video-intelligence';

const client = new VideoIntelligenceServiceClient({
  // Google cloud platform project id
  projectId: process.env.GCP_PROJECT_ID,
  credentials: {
    client_email: process.env.GCP_CLIENT_EMAIL,
    private_key: process.env.GCP_PRIVATE_KEY.replace(/\\n/gm, '\n'),
  },
});

/**
 * @param {string | Uint8Array} inputContent
 * @returns {Promise<protos.google.cloud.videointelligence.v1.VideoAnnotationResults>}
 */
export const annotateVideoWithLabels = async (inputContent) => {
  // Grab the operation using array destructuring. The operation is the first object in the array.
  const [operation] = await client.annotateVideo({
    // Input content
    inputContent: inputContent,
    // Video Intelligence features
    features: ['EXPLICIT_CONTENT_DETECTION'],
  });

  const [operationResult] = await operation.promise();

  // Gets annotations for video
  const [annotations] = operationResult.annotationResults;

  return annotations;
};
```
We first import the VideoIntelligenceServiceClient and then create a new client. The client takes in the project ID and a credentials object containing the client email and private key. There are many different ways of authenticating Google APIs; have a read in the official documentation. We'll define the environment variables we've just used shortly. annotateVideoWithLabels takes in a base64 string or a Uint8Array and then calls the client's annotateVideo method with a few options. Read more about these options in the official documentation. The most important is the features option, which tells Google what operations to run; in this case, we only pass EXPLICIT_CONTENT_DETECTION. Read all about it here. We then wait for the operation to complete by calling promise() on the operation and awaiting the result, which we unpack using JavaScript's array destructuring. To understand the structure of the resulting data, take a look at the official documentation. Finally, we take the first item in the annotation results and return it. And now for those environment variables. Add the following to the .env.local file we created earlier.
```
GCP_PROJECT_ID=YOUR_GCP_PROJECT_ID
GCP_PRIVATE_KEY=YOUR_GCP_PRIVATE_KEY
GCP_CLIENT_EMAIL=YOUR_GCP_CLIENT_EMAIL
```
You can find YOUR_GCP_PROJECT_ID, YOUR_GCP_PRIVATE_KEY, and YOUR_GCP_CLIENT_EMAIL in the .json file that we downloaded in the Google Cloud Project and credentials section.
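One detail worth noting: when you copy the private key out of the .json file into .env.local, the newlines usually end up stored as literal \n sequences on a single line, which is why the client configuration calls .replace(/\\n/gm, '\n') before using the key. A small illustration (the key material below is obviously fake):

```javascript
// Env files typically store the private key on one line with literal "\n" sequences
const rawFromEnv =
  '-----BEGIN PRIVATE KEY-----\\nMIIEfake\\n-----END PRIVATE KEY-----\\n';

// Un-escape them back into real newlines before handing the key to the client
const privateKey = rawFromEnv.replace(/\\n/gm, '\n');

console.log(privateKey.includes('\n')); // true
```

Without this step, the client would receive a malformed PEM key and authentication would fail.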
Now let's move on to the slightly harder part.
API route to handle video uploads
We'll be using Next.js API routes to trigger the video upload. Read more about API routes in the official docs. Create a file called videos.js under the pages/api/ folder and paste the following code inside.
```javascript
// pages/api/videos.js

// Next.js API route support: https://nextjs.org/docs/api-routes/introduction
import { createWriteStream, promises as fs } from 'fs';
import { get } from 'https';

import { annotateVideoWithLabels } from '../../lib/google';
import {
  handleCloudinaryDelete,
  handleCloudinaryUpload,
} from '../../lib/cloudinary';

const videosController = async (req, res) => {
  // Check the incoming HTTP method. Handle the POST request method and reject the rest.
  switch (req.method) {
    // Handle the POST request method
    case 'POST': {
      try {
        const result = await handlePostRequest();

        // Respond to the request with a status code 201(Created)
        return res.status(201).json({
          message: 'Success',
          result,
        });
      } catch (error) {
        // In case of an error, respond to the request with a status code 400(Bad Request)
        return res.status(400).json({
          message: 'Error',
          error,
        });
      }
    }

    // Reject other http methods with a status code 405
    default: {
      return res.status(405).json({ message: 'Method Not Allowed' });
    }
  }
};

const handlePostRequest = async () => {
  // Path to the file you want to upload
  const pathToFile = 'public/videos/explicit.mp4';

  // Read the file using fs. This results in a Buffer
  const file = await fs.readFile(pathToFile);

  // Convert the file to a base64 string in preparation for analyzing the video with google's video intelligence api
  const inputContent = file.toString('base64');

  // Analyze the video using google video intelligence api and annotate explicit frames
  const annotations = await annotateVideoWithLabels(inputContent);

  // Group all adjacent frames with the same pornography likelihood
  const likelihoodClusters = annotations.explicitAnnotation.frames.reduce(
    (prev, curr) => {
      if (
        prev.length &&
        curr.pornographyLikelihood ===
          prev[prev.length - 1][0].pornographyLikelihood
      ) {
        prev[prev.length - 1].push(curr);
      } else {
        prev.push([curr]);
      }

      return prev;
    },
    []
  );

  // Get the frames with a pornography likelihood greater than 2
  const likelyFrames = likelihoodClusters.filter((cluster) =>
    cluster.some((frame) => frame.pornographyLikelihood > 2)
  );

  // Set the start offset for the main explicit video
  let initialStartOffset = 0;

  // Array to hold all uploaded videos
  const uploadResults = [];

  // Loop through the frame clusters with a pornography likelihood greater than 2
  for (const likelyFrame of likelyFrames) {
    // Get the start offset of the segment
    const startOffset =
      parseInt(likelyFrame[0].timeOffset.seconds ?? 0) +
      (likelyFrame[0].timeOffset.nanos ?? 0) / 1000000000;

    // Get the end offset of the segment
    const endOffset =
      parseInt(likelyFrame[likelyFrame.length - 1].timeOffset.seconds ?? 0) +
      (likelyFrame[likelyFrame.length - 1].timeOffset.nanos ?? 0) / 1000000000 +
      0.1;

    if (startOffset !== initialStartOffset) {
      // Upload the preceding segment that is clean and doesn't need any blurring
      const unlikelyFrameUploadResult = await handleCloudinaryUpload(
        pathToFile,
        [{ offset: [initialStartOffset, startOffset] }]
      );

      uploadResults.push({
        startOffset: initialStartOffset,
        endOffset: startOffset,
        uploadResult: unlikelyFrameUploadResult,
      });
    }

    // Upload the explicit segment to cloudinary and apply a blur effect
    const uploadResult = await handleCloudinaryUpload(pathToFile, [
      { offset: [startOffset, endOffset], effect: 'blur:1500' },
    ]);

    uploadResults.push({ startOffset, endOffset, uploadResult });

    initialStartOffset = endOffset;
  }

  // Upload the last segment to cloudinary if any
  const uploadResult = await handleCloudinaryUpload(pathToFile, [
    { start_offset: initialStartOffset },
  ]);

  uploadResults.push({
    startOffset: initialStartOffset,
    endOffset: null,
    uploadResult,
  });

  // Download the first segment so that the rest can be spliced onto it
  const firstFilePath = await downloadVideo(
    uploadResults[0].uploadResult.secure_url,
    uploadResults[0].uploadResult.public_id.replace(/\//g, '-')
  );

  // Re-upload the first segment with all the other segments spliced onto it in order
  const fullVideoUploadResult = await handleCloudinaryUpload(
    firstFilePath,
    uploadResults.slice(1).map((video) => ({
      flags: 'splice',
      overlay: `video:${video.uploadResult.public_id.replace(/\//g, ':')}`,
    }))
  );

  // Clean up the intermediate segments
  await handleCloudinaryDelete(
    uploadResults.map((video) => video.uploadResult.public_id)
  );

  return {
    uploadResult: fullVideoUploadResult,
  };
};

const downloadVideo = (url, name) => {
  return new Promise((resolve, reject) => {
    try {
      get(url, async (res) => {
        const downloadPath = `public/videos/downloads`;

        await fs.mkdir(downloadPath, { recursive: true });

        const filePath = `${downloadPath}/${name}.mp4`;

        const file = createWriteStream(filePath);

        res.pipe(file);

        res.on('error', (error) => {
          reject(error);
        });

        file.on('error', (error) => {
          reject(error);
        });

        file.on('finish', () => {
          file.close();

          resolve(file.path);
        });
      });
    } catch (error) {
      reject(error);
    }
  });
};

export default videosController;
```
The videosController function is what handles the API request. We only handle POST requests and return a 405 - Method Not Allowed response for all other request methods.
In the handlePostRequest function, we first define the path to the file that we want analyzed. In a real-world app, you would upload a video from the user's browser and analyze that; for the sake of simplicity, we're using a static path. The pathToFile variable points to the video that we want to analyze. If you'd like to use the same video I used, just clone the full project from my Github and you'll find it in the public/videos folder.
We convert the video file to a base64 string using file.toString('base64') and then call the annotateVideoWithLabels function that we created earlier. Google Video Intelligence annotates the video frame by frame rather than in segments, so we need a way to group adjacent frames that have the same pornography likelihood. This is done in the following piece of code.
```javascript
// Group all adjacent frames with the same pornography likelihood
const likelihoodClusters = annotations.explicitAnnotation.frames.reduce(
  (prev, curr) => {
    if (
      prev.length &&
      curr.pornographyLikelihood ===
        prev[prev.length - 1][0].pornographyLikelihood
    ) {
      prev[prev.length - 1].push(curr);
    } else {
      prev.push([curr]);
    }

    return prev;
  },
  []
);
```
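To see what this reduce does, here's a standalone run with made-up frame data (the timeOffset fields are omitted, since only the likelihood matters for grouping):

```javascript
// Made-up sample frames: two clean frames, two explicit frames, one clean frame
const frames = [
  { pornographyLikelihood: 1 },
  { pornographyLikelihood: 1 },
  { pornographyLikelihood: 4 },
  { pornographyLikelihood: 4 },
  { pornographyLikelihood: 1 },
];

// Same reduce as above: adjacent frames with equal likelihood end up in one cluster
const clusters = frames.reduce((prev, curr) => {
  if (
    prev.length &&
    curr.pornographyLikelihood === prev[prev.length - 1][0].pornographyLikelihood
  ) {
    prev[prev.length - 1].push(curr);
  } else {
    prev.push([curr]);
  }
  return prev;
}, []);

console.log(clusters.map((c) => c.length)); // [ 2, 2, 1 ]
```

The five frames collapse into three clusters, each representing a contiguous run of the video with a uniform likelihood.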
Once we have that, we filter the clusters to keep only those containing frames with a pornography likelihood higher than 2. There are six levels of likelihood; see here. We only want to match frames that are either possible, likely, or very likely. This is done in the following piece of code.
```javascript
// Get the frames with a pornography likelihood greater than 2
const likelyFrames = likelihoodClusters.filter((cluster) =>
  cluster.some((frame) => frame.pornographyLikelihood > 2)
);
```
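For reference, pornographyLikelihood is the numeric value of the API's Likelihood enum, which is why comparing against 2 keeps exactly the possible, likely, and very likely frames:

```javascript
// Numeric values of the google.cloud.videointelligence.v1.Likelihood enum
const LIKELIHOOD = [
  'LIKELIHOOD_UNSPECIFIED', // 0
  'VERY_UNLIKELY', // 1
  'UNLIKELY', // 2
  'POSSIBLE', // 3
  'LIKELY', // 4
  'VERY_LIKELY', // 5
];

// The filter keeps any frame rated above UNLIKELY
const isExplicit = (frame) => frame.pornographyLikelihood > 2;

console.log(isExplicit({ pornographyLikelihood: 3 })); // true
console.log(isExplicit({ pornographyLikelihood: 2 })); // false
```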
The next step is to iterate through the matched frame clusters. From each cluster, we take the first and last frames to get the start offset and end offset respectively. We cut each segment from the main video and upload it to Cloudinary, applying a blur effect to the explicit segments. This is all done by the transformations that we pass to the handleCloudinaryUpload function.
```javascript
// Upload the explicit segment to cloudinary and apply a blur effect
const uploadResult = await handleCloudinaryUpload(pathToFile, [
  { offset: [startOffset, endOffset], effect: 'blur:1500' },
]);
```
Read more about the transformations here.
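If you're curious what those parameters translate to, here's a rough sketch of the URL component Cloudinary builds from them. This is a simplified illustration of the mapping, not the SDK's actual serializer: the offset shorthand becomes start/end offsets (so_/eo_), and the effect becomes an e_ component.

```javascript
// Simplified mapping of { offset: [start, end], effect } to Cloudinary URL syntax
const toUrlComponent = ({ offset: [start, end], effect }) =>
  `so_${start},eo_${end}` + (effect ? `,e_${effect}` : '');

console.log(toUrlComponent({ offset: [4.1, 6.3], effect: 'blur:1500' }));
// so_4.1,eo_6.3,e_blur:1500
```

So each blurred segment is simply a trimmed, blurred derivative of the same source video.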
We push the upload result for each segment to an array called uploadResults so that we can join them all together later. Finally, we download the very first segment that we uploaded using the downloadVideo function and then concatenate all the other segments to it. With that we have a full video, and we return the result.
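The concatenation itself relies on Cloudinary's splice flag: each remaining segment becomes a video overlay that gets appended, in order, to the base video. Conceptually, the transformation array is built like this (the public IDs below are hypothetical):

```javascript
// Hypothetical public IDs of the already-uploaded segments (everything after the first)
const segmentIds = ['explicit-videos:abc123', 'explicit-videos:def456'];

// Each segment becomes a spliced overlay appended to the base video
const transformation = segmentIds.map((publicId) => ({
  flags: 'splice',
  overlay: `video:${publicId}`,
}));

console.log(transformation[0]);
// { flags: 'splice', overlay: 'video:explicit-videos:abc123' }
```

Note that overlay public IDs use colons instead of slashes, which is why the code replaces / with : before building each overlay.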
The downloadVideo function is self-explanatory: it fetches the file using the get method from the https package and saves it in the public/videos/downloads folder. We're done with the backend; let's move on to the front end.
The frontend
Paste the following inside pages/index.js.
```javascript
// pages/index.js

import Head from 'next/head';
import { useState } from 'react';

export default function Home() {
  const [video, setVideo] = useState(null);
  const [loading, setLoading] = useState(false);

  const handleUploadVideo = async () => {
    try {
      // Set loading to true
      setLoading(true);

      // Make a POST request to the `api/videos/` endpoint
      const response = await fetch('/api/videos', {
        method: 'post',
      });

      const data = await response.json();

      // Check if the response is successful
      if (response.status >= 200 && response.status < 300) {
        const result = data.result;

        // Update our video state with the result
        setVideo(result);
      } else {
        throw data;
      }
    } catch (error) {
      // TODO: Handle error
      console.error(error);
    } finally {
      // Set loading to false once a response is available
      setLoading(false);
    }
  };

  return (
    <div>
      <Head>
        <title>
          Blur explicit content with Google Video Intelligence and Cloudinary
        </title>
        <meta
          name='description'
          content='Blur explicit content with Google Video Intelligence and Cloudinary'
        />
        <link rel='icon' href='/favicon.ico' />
      </Head>

      <header>
        <h1>
          Blur explicit content with Google Video Intelligence and Cloudinary
        </h1>
      </header>

      <main>
        <hr />

        <div className='upload-wrapper'>
          <button onClick={handleUploadVideo} disabled={loading || video}>
            Upload
          </button>
        </div>

        <hr />

        {loading && <div className='loading'>Loading...</div>}

        {video ? (
          [
            <div
              className='original-video-wrapper'
              key='original-video-wrapper'
            >
              <h2>Original Video</h2>
              <video src='/videos/explicit.mp4' controls></video>
            </div>,

            <hr key='videos-break' />,

            <div className='blurred-video-wrapper' key='blurred-video-wrapper'>
              <h2>Blurred Video</h2>
              <video src={video.uploadResult.secure_url} controls></video>
            </div>,
          ]
        ) : (
          <div className='no-video'>
            <p>Tap On The Upload Button To Load Video</p>
          </div>
        )}
      </main>

      <style jsx>{`
        header {
          width: 100%;
          min-height: 100px;
          display: flex;
          align-items: center;
          justify-content: center;
        }

        main {
          min-height: 100vh;
        }

        main div.upload-wrapper {
          display: flex;
          justify-content: center;
          align-items: center;
          padding: 20px 0;
        }

        main div.upload-wrapper button {
          padding: 10px;
          min-width: 200px;
          height: 50px;
        }

        main div.loading {
          display: flex;
          justify-content: center;
          align-items: center;
          background-color: #9900ff;
          color: #ffffff;
          height: 150px;
        }

        main div.original-video-wrapper,
        main div.blurred-video-wrapper {
          width: 100%;
          display: flex;
          flex-flow: column;
          justify-content: center;
          align-items: center;
        }

        main div.original-video-wrapper video,
        main div.blurred-video-wrapper video {
          width: 80%;
        }

        main div.no-video {
          background-color: #ececec;
          min-height: 300px;
          display: flex;
          flex-flow: column;
          justify-content: center;
          align-items: center;
        }
      `}</style>
    </div>
  );
}
```
This is just standard React. The handleUploadVideo function makes a POST request to the /api/videos endpoint that we created earlier and updates the video state with the result. For the HTML, we have an upload button that triggers handleUploadVideo, plus two video elements: one for the original video and another for the blurred video. The rest is just some CSS.
With this, you're ready to run your project.
```bash
npm run dev
```