Blur explicit content in videos

Eugene Musebe

Introduction

The modern internet is awash with explicit videos, and most children today have practically unlimited access to online content. One way to protect them is to automatically detect explicit content in videos and blur it. We can achieve this using Google's Video Intelligence API and Cloudinary. In this tutorial, we'll look at how to implement this with Next.js.

Codesandbox

The final project can be viewed on Codesandbox.

You can find the full source code on my Github repository.

Getting started

First things first, you need to have Node.js and NPM installed. Working knowledge of JavaScript, Node.js, and React/Next.js is also a plus.

Cloudinary Credentials

We're going to use Cloudinary for media upload and storage. It's easy to get started with, and it's free as well. Create a free account at Cloudinary and then navigate to the Console page. Take note of your Cloud name, API Key, and API Secret. We'll come back to them later.

Google Cloud Project and credentials

The Video Intelligence API is an amazing API provided by Google via the Google Cloud Platform. I'm going to walk you through how to create a new project and obtain the credentials. If you're already familiar with GCP, you can follow the quickstart guide instead.

Create an account if you do not already have one, then navigate to the project selector page. Once there, select an existing project or create a new one. Make sure that billing is enabled for the project you create/select. Google APIs have a free tier with a monthly limit that you can get started with, but use the APIs with caution so as not to exceed your limits. Here's how you can confirm that billing is enabled.

The next thing we need to do is enable the Video Intelligence API so that we can use it. Navigate to the Create a new service account page and select the project you created earlier. Input an appropriate name for the service account, such as blur-explicit-content-with-cloudinary.

You can leave other options as they are and create the service account. Navigate back to the service accounts dashboard and you'll notice your newly created service account. Under the more actions button, click on Manage keys.

Click on Add key and then on Create new key.

In the pop-up dialog, make sure to choose the JSON option.

Once you're done, a .json file will be downloaded to your computer. Take note of this file's location as we will be using it later.

We're now ready to start coding.

Implementation

Before anything else, we need to create a new Next.js project. Fire up your terminal/command line and run the following command.

npx create-next-app blur-explicit-content-with-cloudinary

This will scaffold a basic project called blur-explicit-content-with-cloudinary. Take a look at the official documentation for more advanced options, such as TypeScript support. Change directory into the new project and open it in your favorite code editor.

cd blur-explicit-content-with-cloudinary

Cloudinary upload

Let's start by creating a few functions that will handle uploading media to Cloudinary and deleting it.

We need to install the Cloudinary SDK first.

npm install --save cloudinary

Next, create a folder called lib/ at the root of your project. Create a new file called cloudinary.js and paste the following code inside.

// lib/cloudinary.js

// Import the v2 api and rename it to cloudinary
import { v2 as cloudinary } from 'cloudinary';

// Initialize the SDK with cloud_name, api_key, and api_secret
cloudinary.config({
  cloud_name: process.env.CLOUD_NAME,
  api_key: process.env.API_KEY,
  api_secret: process.env.API_SECRET,
});

const FOLDER_NAME = 'explicit-videos/';

export const handleCloudinaryUpload = (path, transformation = []) => {
  // Create and return a new Promise
  return new Promise((resolve, reject) => {
    // Use the sdk to upload media
    cloudinary.uploader.upload(
      path,
      {
        // Folder to store video in
        folder: FOLDER_NAME,
        // Type of resource
        resource_type: 'video',
        transformation,
      },
      (error, result) => {
        if (error) {
          // Reject the promise with an error if any
          return reject(error);
        }

        // Resolve the promise with a successful result
        return resolve(result);
      }
    );
  });
};

export const handleCloudinaryDelete = async (ids) => {
  return new Promise((resolve, reject) => {
    cloudinary.api.delete_resources(
      ids,
      {
        resource_type: 'video',
      },
      (error, result) => {
        if (error) {
          return reject(error);
        }

        return resolve(result);
      }
    );
  });
};

We first import the Cloudinary v2 SDK and rename it to cloudinary. This is purely for readability. Next, we initialize the SDK by calling its config method, passing the cloud_name, api_key, and api_secret. We're using environment variables here, which we'll define in a moment. We also define the folder where all of our videos will be stored.

The handleCloudinaryUpload function takes in a path to the file that we want to upload and an optional array of transformations to run on the video. Inside this function, we call the uploader.upload method on the Cloudinary SDK to upload the file. Read more about the upload media API and the options you can pass in the official documentation.

The handleCloudinaryDelete function takes in an array of public IDs belonging to the resources we want to delete, then calls the api.delete_resources method on the SDK. Read more about this here.

Let's define those environment variables. Luckily, Next.js has built-in support for environment variables; this topic is covered in depth in their docs. Create a file called .env.local at the root of your project and paste the following inside.

CLOUD_NAME=YOUR_CLOUD_NAME
API_KEY=YOUR_API_KEY
API_SECRET=YOUR_API_SECRET

Make sure to replace YOUR_CLOUD_NAME, YOUR_API_KEY, and YOUR_API_SECRET with the values we got from the Cloudinary Credentials section.

And that's it for this file.
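To give a feel for how these helpers will be used later on, here's a minimal, hypothetical usage sketch. The file path, offsets, and blur strength below are made up for illustration; they mirror the transformations we'll pass further down in the tutorial.

// Hypothetical usage of the helpers above (path and offsets are made up)
import { handleCloudinaryUpload, handleCloudinaryDelete } from './lib/cloudinary';

const run = async () => {
  // Upload seconds 2–5 of a local video and blur that segment heavily
  const result = await handleCloudinaryUpload('public/videos/sample.mp4', [
    { offset: [2, 5], effect: 'blur:1500' },
  ]);

  console.log(result.public_id, result.secure_url);

  // Later, clean up the uploaded segment
  await handleCloudinaryDelete([result.public_id]);
};

run().catch(console.error);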

Google Video Intelligence

Let's now create the functions that will allow us to communicate with the Video Intelligence API.

The first thing is to install the dependencies.

npm install @google-cloud/video-intelligence

Create a new file under lib/ called google.js and paste the following code inside.

// lib/google.js

import {
  VideoIntelligenceServiceClient,
  protos,
} from '@google-cloud/video-intelligence';

const client = new VideoIntelligenceServiceClient({
  // Google cloud platform project id
  projectId: process.env.GCP_PROJECT_ID,
  credentials: {
    client_email: process.env.GCP_CLIENT_EMAIL,
    private_key: process.env.GCP_PRIVATE_KEY.replace(/\\n/gm, '\n'),
  },
});

/**
 *
 * @param {string | Uint8Array} inputContent
 * @returns {Promise<protos.google.cloud.videointelligence.v1.VideoAnnotationResults>}
 */
export const annotateVideoWithLabels = async (inputContent) => {
  // Grab the operation using array destructuring. The operation is the first object in the array.
  const [operation] = await client.annotateVideo({
    // Input content
    inputContent: inputContent,
    // Video Intelligence features
    features: ['EXPLICIT_CONTENT_DETECTION'],
  });

  const [operationResult] = await operation.promise();

  // Gets annotations for video
  const [annotations] = operationResult.annotationResults;

  return annotations;
};

We first import the VideoIntelligenceServiceClient and then create a new client. The client takes in the project ID and a credentials object containing the client email and private key. There are many different ways of authenticating Google APIs; have a read through the official documentation. We'll define the environment variables we've just used shortly.

The annotateVideoWithLabels function takes in a string or a buffer and then calls the client's annotateVideo method with a few options. Read more about these options in the official documentation. The most important one is the features option, which tells Google what operations to run on the video. In this case, we only pass EXPLICIT_CONTENT_DETECTION. Read all about this here.

We then wait for the operation to complete by calling promise() on it and awaiting the result. We get the operation result using JavaScript's array destructuring; to understand the structure of the resulting data, take a look at the official documentation. Finally, we take the first item in the annotation results and return it.

And now for those environment variables. Add the following to the .env.local file we created earlier.

GCP_PROJECT_ID=YOUR_GCP_PROJECT_ID
GCP_PRIVATE_KEY=YOUR_GCP_PRIVATE_KEY
GCP_CLIENT_EMAIL=YOUR_GCP_CLIENT_EMAIL

You can find YOUR_GCP_PROJECT_ID, YOUR_GCP_PRIVATE_KEY, and YOUR_GCP_CLIENT_EMAIL in the .json file that we downloaded in the Google Cloud Project and credentials section.
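Before moving on, it helps to know roughly what annotateVideoWithLabels resolves with. Here's a hypothetical, trimmed-down sketch of the shape, limited to the fields we'll actually use; the values are made up and the exact representation (for example, whether seconds comes back as a string) can vary.

// Hypothetical result of annotateVideoWithLabels, trimmed to the fields we use.
// timeOffset marks when the frame occurs; pornographyLikelihood is an enum value.
const exampleAnnotations = {
  explicitAnnotation: {
    frames: [
      { timeOffset: { seconds: '0', nanos: 0 }, pornographyLikelihood: 1 }, // VERY_UNLIKELY
      { timeOffset: { seconds: '1', nanos: 220000000 }, pornographyLikelihood: 4 }, // LIKELY
      { timeOffset: { seconds: '2', nanos: 440000000 }, pornographyLikelihood: 5 }, // VERY_LIKELY
    ],
  },
};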

Now let's move on to the slightly harder part.

API route to handle video uploads

We'll be using Next.js API routes to trigger the video upload. Read more about API routes in the official docs. Create a file called videos.js under the pages/api/ folder and paste the following code inside.

// pages/api/videos.js

// Next.js API route support: https://nextjs.org/docs/api-routes/introduction
import { promises as fs, createWriteStream } from 'fs';
import { get } from 'https';

import { annotateVideoWithLabels } from '../../lib/google';
import {
  handleCloudinaryDelete,
  handleCloudinaryUpload,
} from '../../lib/cloudinary';

const videosController = async (req, res) => {
  // Check the incoming HTTP method. Handle the POST request method and reject the rest.
  switch (req.method) {
    // Handle the POST request method
    case 'POST': {
      try {
        const result = await handlePostRequest();

        // Respond to the request with a status code 201 (Created)
        return res.status(201).json({
          message: 'Success',
          result,
        });
      } catch (error) {
        // In case of an error, respond to the request with a status code 400 (Bad Request)
        return res.status(400).json({
          message: 'Error',
          error,
        });
      }
    }

    // Reject other HTTP methods with a status code 405
    default: {
      return res.status(405).json({ message: 'Method Not Allowed' });
    }
  }
};

const handlePostRequest = async () => {
  // Path to the file you want to upload
  const pathToFile = 'public/videos/explicit.mp4';

  // Read the file using fs. This results in a Buffer
  const file = await fs.readFile(pathToFile);

  // Convert the file to a base64 string in preparation for analyzing the video with Google's Video Intelligence API
  const inputContent = file.toString('base64');

  // Analyze the video using the Video Intelligence API and annotate explicit frames
  const annotations = await annotateVideoWithLabels(inputContent);

  // Group all adjacent frames with the same pornography likelihood
  const likelihoodClusters = annotations.explicitAnnotation.frames.reduce(
    (prev, curr) => {
      if (
        prev.length &&
        curr.pornographyLikelihood ===
          prev[prev.length - 1][0].pornographyLikelihood
      ) {
        prev[prev.length - 1].push(curr);
      } else {
        prev.push([curr]);
      }

      return prev;
    },
    []
  );

  // Get the frame clusters with a pornography likelihood greater than 2
  const likelyFrames = likelihoodClusters.filter((cluster) =>
    cluster.some((frame) => frame.pornographyLikelihood > 2)
  );

  // Set the start offset for the main explicit video
  let initialStartOffset = 0;

  // Array to hold all uploaded videos
  const uploadResults = [];

  // Loop through the frame clusters with a pornography likelihood greater than 2
  for (const likelyFrame of likelyFrames) {
    // Get the start offset of the segment
    const startOffset =
      parseInt(likelyFrame[0].timeOffset.seconds ?? 0) +
      (likelyFrame[0].timeOffset.nanos ?? 0) / 1000000000;

    // Get the end offset of the segment
    const endOffset =
      parseInt(likelyFrame[likelyFrame.length - 1].timeOffset.seconds ?? 0) +
      (likelyFrame[likelyFrame.length - 1].timeOffset.nanos ?? 0) / 1000000000 +
      0.1;

    if (startOffset !== 0) {
      // Upload the clean segment that comes before the explicit one. It doesn't need any blurring.
      const unlikelyFrameUploadResult = await handleCloudinaryUpload(
        pathToFile,
        [{ offset: [initialStartOffset, startOffset] }]
      );

      // Push the upload result for the segment that doesn't need to be blurred
      uploadResults.push({
        startOffset: initialStartOffset,
        endOffset: startOffset,
        uploadResult: unlikelyFrameUploadResult,
      });
    }

    // Upload the explicit segment to Cloudinary and apply a blur effect
    const uploadResult = await handleCloudinaryUpload(pathToFile, [
      { offset: [startOffset, endOffset], effect: 'blur:1500' },
    ]);

    // Push the upload result for the blurred segment
    uploadResults.push({ startOffset, endOffset, uploadResult });

    initialStartOffset = endOffset;
  }

  // Upload the last segment to Cloudinary, if any
  const uploadResult = await handleCloudinaryUpload(pathToFile, [
    { start_offset: initialStartOffset },
  ]);

  uploadResults.push({
    startOffset: initialStartOffset,
    endOffset: null,
    uploadResult,
  });

  // Download the very first segment so the rest can be spliced onto it
  const firstFilePath = await downloadVideo(
    uploadResults[0].uploadResult.secure_url,
    uploadResults[0].uploadResult.public_id.replace(/\//g, '-')
  );

  // Re-upload the first segment and concatenate the remaining segments to it using the splice flag
  const fullVideoUploadResult = await handleCloudinaryUpload(firstFilePath, [
    uploadResults.slice(1).map((video) => ({
      flags: 'splice',
      overlay: `video:${video.uploadResult.public_id.replace(/\//g, ':')}`,
    })),
  ]);

  // Delete the intermediate segments now that the full video has been assembled
  await handleCloudinaryDelete(
    uploadResults.map((video) => video.uploadResult.public_id)
  );

  return {
    uploadResult: fullVideoUploadResult,
  };
};

const downloadVideo = (url, name) => {
  return new Promise((resolve, reject) => {
    try {
      get(url, async (res) => {
        const downloadPath = `public/videos/downloads`;

        // Make sure the downloads folder exists
        await fs.mkdir(downloadPath, { recursive: true });

        const filePath = `${downloadPath}/${name}.mp4`;

        const file = createWriteStream(filePath);

        res.pipe(file);

        res.on('error', (error) => {
          reject(error);
        });

        file.on('error', (error) => {
          reject(error);
        });

        file.on('finish', () => {
          file.close();

          resolve(file.path);
        });
      });
    } catch (error) {
      reject(error);
    }
  });
};

export default videosController;

The videosController function is what handles the API request. We only handle POST requests and return a 405 - Method Not Allowed response for all other request methods.
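For clarity, these are the kinds of response bodies the route can send, based on the handler above; the payloads are abbreviated.

// 201 Created – analysis and upload succeeded
// { "message": "Success", "result": { "uploadResult": { /* Cloudinary upload result */ } } }

// 400 Bad Request – something failed while processing
// { "message": "Error", "error": { /* error details */ } }

// 405 Method Not Allowed – any request that isn't a POST
// { "message": "Method Not Allowed" }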

In the handlePostRequest function, we first define the path to the file that we want to be analyzed. Now, for a real-world app, you would want to upload a video from the user's browser and analyze that. For the sake of simplicity, we're using a static path. The variable pathToFile holds a path that points to the video that we want to analyze. If you'd like to use the same video I used, just clone the full project from my Github and you can find it in the public/videos folder.
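If you did want to accept an upload from the browser instead of a static file, one possible approach is to parse the incoming multipart request with a package such as formidable and feed the parsed file's temporary path into the same flow. This is only a rough sketch under that assumption, not part of the tutorial's code, and field names like files.video may differ in your setup:

// Hypothetical sketch: parsing a browser upload with formidable instead of a static path.
// Requires `npm install formidable` and disabling Next.js's default body parser.
import formidable from 'formidable';

export const config = {
  api: { bodyParser: false },
};

const parseForm = (req) =>
  new Promise((resolve, reject) => {
    const form = formidable({ keepExtensions: true });

    form.parse(req, (error, fields, files) => {
      if (error) {
        return reject(error);
      }

      resolve({ fields, files });
    });
  });

// Inside the POST handler you could then do something like:
// const { files } = await parseForm(req);
// const pathToFile = files.video.filepath; // temporary path to the uploaded file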

We convert the video file to a base64 string using file.toString('base64') and then call the annotateVideoWithLabels function that we created earlier. Google Video Intelligence annotates the video frame by frame rather than in segments, so we need a way to group adjacent frames that have the same pornography likelihood. This is done in the following piece of code.

// Group all adjacent frames with the same pornography likelihood
const likelihoodClusters = annotations.explicitAnnotation.frames.reduce(
  (prev, curr) => {
    if (
      prev.length &&
      curr.pornographyLikelihood ===
        prev[prev.length - 1][0].pornographyLikelihood
    ) {
      prev[prev.length - 1].push(curr);
    } else {
      prev.push([curr]);
    }

    return prev;
  },
  []
);
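To make the grouping concrete, here's a small hypothetical input and the clusters the reduce above would produce; the likelihood values are made up.

// Hypothetical frames, simplified to the fields the reduce cares about
const frames = [
  { timeOffset: { seconds: '0' }, pornographyLikelihood: 1 },
  { timeOffset: { seconds: '1' }, pornographyLikelihood: 1 },
  { timeOffset: { seconds: '2' }, pornographyLikelihood: 4 },
  { timeOffset: { seconds: '3' }, pornographyLikelihood: 4 },
  { timeOffset: { seconds: '4' }, pornographyLikelihood: 2 },
];

// Running the reduce over these frames yields three clusters:
// [
//   [frames[0], frames[1]], // likelihood 1
//   [frames[2], frames[3]], // likelihood 4
//   [frames[4]],            // likelihood 2
// ]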

Once we have that, we filter down to the clusters that contain frames with a pornography likelihood greater than 2. There are six levels of likelihood: LIKELIHOOD_UNSPECIFIED (0), VERY_UNLIKELY (1), UNLIKELY (2), POSSIBLE (3), LIKELY (4), and VERY_LIKELY (5). See here. We only want to match the frames that are either possible, likely, or very likely, hence the greater-than-2 check. This is done in the following piece of code.

// Get the frames with a pornography likelihood greater than 2
const likelyFrames = likelihoodClusters.filter((cluster) =>
  cluster.some((frame) => frame.pornographyLikelihood > 2)
);

The next thing is to iterate over the matched frame clusters. We take each cluster's first and last frames to get the start offset and end offset respectively, cut each segment from the main video, and upload it to Cloudinary with a blur effect applied. This is all done via the transformations that we pass to the handleCloudinaryUpload function.

// Upload the explicit segment to Cloudinary and apply a blur effect
const uploadResult = await handleCloudinaryUpload(pathToFile, [
  { offset: [startOffset, endOffset], effect: 'blur:1500' },
]);

Read more about the transformations here.
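For reference, a transformation like the one above should result in a delivery URL along these lines; the cloud name, public ID, and offsets here are made up, and the parameter order may differ from what the SDK generates.

https://res.cloudinary.com/YOUR_CLOUD_NAME/video/upload/so_8.2,eo_12.5,e_blur:1500/explicit-videos/abc123.mp4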

We push the upload result for each segment to an array called uploadResults so that we can join them all together later. Finally, we download the very first segment that we uploaded using the downloadVideo function and then concatenate all the other segments onto it. With that, we have the full blurred video, and we return the result of that upload.

The downloadVideo function is fairly self-explanatory. It fetches the file using the get method from the https package and saves it in the public/videos/downloads folder. We're done with the backend; let's move on to the front end.

The frontend

Paste the following inside pages/index.js.

// pages/index.js

import Head from 'next/head';
import { useState } from 'react';

export default function Home() {
  const [video, setVideo] = useState(null);
  const [loading, setLoading] = useState(false);

  const handleUploadVideo = async () => {
    try {
      // Set loading to true
      setLoading(true);

      // Make a POST request to the `api/videos/` endpoint
      const response = await fetch('/api/videos', {
        method: 'post',
      });

      const data = await response.json();

      // Check if the response is successful
      if (response.status >= 200 && response.status < 300) {
        const result = data.result;

        // Update our video state with the result
        setVideo(result);
      } else {
        throw data;
      }
    } catch (error) {
      // TODO: Handle error
      console.error(error);
    } finally {
      // Set loading back to false once a response is available
      setLoading(false);
    }
  };

  return (
    <div>
      <Head>
        <title>
          Blur explicit content with Google Video Intelligence and Cloudinary
        </title>

        <meta
          name='description'
          content='Blur explicit content with Google Video Intelligence and Cloudinary'
        />

        <link rel='icon' href='/favicon.ico' />
      </Head>

      <header>
        <h1>
          Blur explicit content with Google Video Intelligence and Cloudinary
        </h1>
      </header>

      <main>
        <hr />

        <div className='upload-wrapper'>
          <button onClick={handleUploadVideo} disabled={loading || video}>
            Upload
          </button>
        </div>

        <hr />

        {loading && <div className='loading'>Loading...</div>}

        {video ? (
          [
            <div
              className='original-video-wrapper'
              key='original-video-wrapper'
            >
              <h2>Original Video</h2>

              <video src='/videos/explicit.mp4' controls></video>
            </div>,

            <hr key='videos-break' />,

            <div className='blurred-video-wrapper' key='blurred-video-wrapper'>
              <h2>Blurred Video</h2>

              <video src={video.uploadResult.secure_url} controls></video>
            </div>,
          ]
        ) : (
          <div className='no-video'>
            <p>Tap On The Upload Button To Load Video</p>
          </div>
        )}
      </main>

      <style jsx>{`
        header {
          width: 100%;
          min-height: 100px;
          display: flex;
          align-items: center;
          justify-content: center;
        }

        main {
          min-height: 100vh;
        }

        main div.upload-wrapper {
          display: flex;
          justify-content: center;
          align-items: center;
          padding: 20px 0;
        }

        main div.upload-wrapper button {
          padding: 10px;
          min-width: 200px;
          height: 50px;
        }

        main div.loading {
          display: flex;
          justify-content: center;
          align-items: center;
          background-color: #9900ff;
          color: #ffffff;
          height: 150px;
        }

        main div.original-video-wrapper {
          width: 100%;
          display: flex;
          flex-flow: column;
          justify-content: center;
          align-items: center;
        }

        main div.original-video-wrapper video {
          width: 80%;
        }

        main div.blurred-video-wrapper {
          width: 100%;
          display: flex;
          flex-flow: column;
          justify-content: center;
          align-items: center;
        }

        main div.blurred-video-wrapper video {
          width: 80%;
        }

        main div.no-video {
          background-color: #ececec;
          min-height: 300px;
          display: flex;
          flex-flow: column;
          justify-content: center;
          align-items: center;
        }
      `}</style>
    </div>
  );
}

This is just standard React. The handleUploadVideo function makes a POST request to the /api/videos endpoint that we created earlier and updates the video state with the result. For the HTML, we have an upload button that triggers handleUploadVideo, plus two video elements: one for the original video and another for the blurred video. The rest is just some CSS.

With this, you're ready to run your project.

npm run dev

Eugene Musebe

Software Developer

I'm a full-stack software developer, content creator, and tech community builder based in Nairobi, Kenya. I am addicted to learning new technologies and love working with like-minded people.