Detecting face landmarks and adding filter overlays

Eugene Musebe

Introduction

We've all seen those cool Snapchat and Instagram filters that go over a person's mouth, nose, or eyes. This is made possible by machine learning and some clever image positioning. In this tutorial, we'll use face-api.js to detect face landmarks and Cloudinary to overlay images/filters over the detected landmarks. We're going to build our application using Next.js.

Codesandbox

The final project can be viewed on Codesandbox.

You can find the full source code on my GitHub repository.

The setup

This is primarily a JavaScript project, so working knowledge of JavaScript is required. We'll also be using React.js and a bit of Node.js. Knowledge of the two is recommended but not required. In addition, there's a machine learning (ML) aspect. For this basic tutorial, you won't need any ML or TensorFlow knowledge. However, if you would like to train your own models or expand the functionality, you need to be conversant with the field.

Cloudinary is a service that allows developers to store different types of media, manipulate and transform the media and also optimize its delivery.

face-api.js is a JavaScript API for face detection and face recognition in the browser implemented on top of the tensorflow.js core API.

Next.js is a React framework that allows for production-grade features such as hybrid static & server rendering, file system routing, incremental static generation, and others.

Let's start by creating a new Next.js project. This is fairly easy to do using the Next.js CLI app. Open your terminal in your desired folder and run the following command.

npx create-next-app face-landmark-filters

This scaffolds a new project called face-landmark-filters. You can change the name to any name you'd like. Change the directory into the new face-landmark-filters folder and open it in your favorite code editor.

cd face-landmark-filters

Cloudinary account and credentials

It's quite easy to get started with a free Cloudinary account if you do not already have one. Fire up your browser and go to Cloudinary. Create an account if you don't have one, then proceed to log in. Over at the console page, you'll find the credentials you need.

Open your code editor and create a new file called .env.local at the root of your project. We're going to be putting our environment variables in this file. In case you're not familiar with environment variables, they allow us to abstract sensitive keys and secrets from our code. Read about support for environment variables in Next.js from the documentation.

Paste the following inside your .env.local file.

CLOUD_NAME=YOUR_CLOUD_NAME
API_KEY=YOUR_API_KEY
API_SECRET=YOUR_API_SECRET

Replace YOUR_CLOUD_NAME, YOUR_API_KEY and YOUR_API_SECRET with the cloud name, API key and API secret values that you got from the Cloudinary console page.

Libraries and dependencies

We're going to need a few node packages for this project.

We're using @vladmandic/face-api instead of face-api.js because face-api.js doesn't seem to be actively maintained and isn't compatible with newer versions of tensorflow.js.

@tensorflow/tfjs-node speeds up face and landmark detection using the ML models. It's not required, but the speed boost is nice to have.

canvas will patch the Node.js environment to have support for graphical functions. It patches the HTMLImageElement and HTMLCanvasElement.

formidable will be responsible for parsing any form data that we receive in our api routes.

Run the following command to install all of the above:

npm install cloudinary formidable @vladmandic/face-api @tensorflow/tfjs-node canvas

Machine Learning models

face-api.js requires some pre-trained machine learning models that allow TensorFlow to detect faces as well as facial landmarks. As I mentioned earlier, if you'd like to train your own models or extend the functionality, you need to have knowledge of ML and deep learning. The creator of face-api.js was generous enough to provide some pre-trained models along with the library. Download the models at https://github.com/vladmandic/face-api/tree/master/model and save them in your project inside the public/models folder. You can also get the full source code for this tutorial on my GitHub with all the models already added.

Filter images

We also need the images that we are going to be using as filters. These need to be PNGs with a transparent background. For ease, you can download the images from https://github.com/newtonmunene99/face-landmark-filters/blob/master/public/images and save them inside the public/images folder. Again, you can also get the full source code for this tutorial on my GitHub with all the images already added.

Getting started

Create a folder named lib at the root of your project. Inside this folder create a file called parse-form.js. Paste the following code inside lib/parse-form.js

// lib/parse-form.js

import { IncomingForm, Files, Fields } from "formidable";

/**
 * Parses the incoming form data.
 *
 * @param {NextApiRequest} req The incoming request object
 * @returns {Promise<{fields:Fields;files:Files;}>} The parsed form data
 */
export const parseForm = (req) => {
  return new Promise((resolve, reject) => {
    // Create a new incoming form
    const form = new IncomingForm({ keepExtensions: true, multiples: true });

    form.parse(req, (error, fields, files) => {
      if (error) {
        return reject(error);
      }

      return resolve({ fields, files });
    });
  });
};

This file exports a function called parseForm. This will use formidable to parse any requests that we receive in our api routes with the `multipart/form-data` content-type header. Read about the specifics in the formidable docs.
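
For context, here's a minimal sketch of how parseForm is meant to be consumed inside an API route; we'll do exactly this later in pages/api/images/index.js. The route and response shape here are placeholders, not part of the final app.

// Sketch: consuming parseForm in a hypothetical API route
import { parseForm } from "../../lib/parse-form";

// Disable the default body parser so formidable can read the raw form data
export const config = {
  api: {
    bodyParser: false,
  },
};

export default async function handler(req, res) {
  // fields holds the text inputs, files holds the uploaded files
  const { fields, files } = await parseForm(req);

  return res
    .status(200)
    .json({ fields: Object.keys(fields), files: Object.keys(files) });
}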


Create another file inside the lib folder and name it constants.js. Paste the following code inside lib/constants.js

// lib/constants.js

/**
 * @typedef {Object} Preset
 * @property {number} widthOffset
 * @property {number} heightOffset
 */

/**
 * @typedef {Object} Filter
 * @property {string} publicId
 * @property {string} path
 * @property {string} landmark
 * @property {Preset} presets
 */

/**
 * Cloudinary folder where images will be uploaded to
 */
export const CLOUDINARY_FOLDER_NAME = "face-landmark-filters/";

/**
 * Cloudinary folder where filters will be uploaded to
 */
export const FILTERS_FOLDER_NAME = "filters/";

/**
 * Facial landmarks
 */
export const LANDMARKS = {
  LEFT_EYE: "left_eye",
  RIGHT_EYE: "right_eye",
  NOSE: "nose",
  MOUTH: "mouth",
};

/**
 * Filters that we can apply to the image
 * @type {Filter[]}
 */
export const FILTERS = [
  {
    publicId: "snapchat_nose",
    path: "public/images/snapchat_nose.png",
    landmark: LANDMARKS.NOSE,
    presets: {
      widthOffset: 50,
      heightOffset: 50,
    },
  },
  {
    publicId: "clown_nose",
    path: "public/images/clown_nose.png",
    landmark: LANDMARKS.NOSE,
    presets: {
      widthOffset: 30,
      heightOffset: 30,
    },
  },
  {
    publicId: "snapchat_tongue",
    path: "public/images/tongue.png",
    landmark: LANDMARKS.MOUTH,
    presets: {
      widthOffset: 20,
      heightOffset: 50,
    },
  },
];

These are just a few variables that we'll be using in our project. The FILTERS array holds all the filters that we're going to be able to use. For each filter, we define a public id, its path in the file system, the facial landmark over which we can apply the filter, and a few presets that we'll use when applying it. Let me explain the presets a bit more. Say we have a nose filter whose image is a bit too small or too large in pixel size. We need to shrink or enlarge it so that it covers the person's nose perfectly, so we define width and height offsets to use. To make the filter smaller, use a negative value; to make it bigger, use a positive value.

With that said, if you want more filters, just store the filter images inside the public/images folder and then add them to the FILTERS array. Make sure the publicId is unique for every filter.
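
For illustration, a hypothetical extra entry might look like the object below. The glasses image, public id, and offset values are all made up; only the shape of the object matters.

// Hypothetical entry to append to the FILTERS array in lib/constants.js
const pixelGlassesFilter = {
  publicId: "pixel_glasses", // must be unique across all filters
  path: "public/images/pixel_glasses.png", // transparent PNG stored locally
  landmark: LANDMARKS.LEFT_EYE, // which facial landmark the overlay is anchored to
  presets: {
    widthOffset: 40, // render the overlay 40px wider than the detected landmark
    heightOffset: -10, // and 10px shorter
  },
};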


Create a new file under the lib folder and name it cloudinary.js. Paste the following inside.

// lib/cloudinary.js

// Import the v2 api and rename it to cloudinary
import { v2 as cloudinary, TransformationOptions } from "cloudinary";
import { CLOUDINARY_FOLDER_NAME } from "./constants";

// Initialize the SDK with cloud_name, api_key, and api_secret
cloudinary.config({
  cloud_name: process.env.CLOUD_NAME,
  api_key: process.env.API_KEY,
  api_secret: process.env.API_SECRET,
});

/**
 * Get cloudinary uploads
 * @param {string} folder Folder name
 * @returns {Promise}
 */
export const handleGetCloudinaryUploads = (folder = CLOUDINARY_FOLDER_NAME) => {
  return cloudinary.api.resources({
    type: "upload",
    prefix: folder,
    resource_type: "image",
  });
};

/**
 * @typedef {Object} Resource
 * @property {string | Buffer} file
 * @property {string} publicId
 * @property {boolean} inFolder
 * @property {string} folder
 * @property {TransformationOptions} transformation
 *
 */

/**
 * Uploads an image to cloudinary and returns the upload result
 *
 * @param {Resource} resource
 */
export const handleCloudinaryUpload = ({
  file,
  publicId,
  transformation,
  folder = CLOUDINARY_FOLDER_NAME,
  inFolder = false,
}) => {
  return cloudinary.uploader.upload(file, {
    // Folder to store the image in
    folder: inFolder ? folder : null,
    // Public id of image.
    public_id: publicId,
    // Type of resource
    resource_type: "auto",
    // Transformation to apply to the image
    transformation,
  });
};

/**
 * Deletes resources from cloudinary. Takes in an array of public ids
 * @param {string[]} ids
 */
export const handleCloudinaryDelete = (ids) => {
  return cloudinary.api.delete_resources(ids, {
    resource_type: "image",
  });
};

This file contains all the functions we need to communicate with Cloudinary. At the top, we import the v2 API from the SDK and rename it to cloudinary for readability. We also import the CLOUDINARY_FOLDER_NAME variable from the lib/constants.js file that we created earlier. We then initialize the SDK by calling the config method and passing it the cloud name, API key, and API secret. Remember, we defined these as environment variables in our .env.local file earlier.

The handleGetCloudinaryUploads function calls the api.resources method to get all resources that have been uploaded to a specific folder. Read about this in the cloudinary admin api docs. handleCloudinaryUpload calls the uploader.upload method to upload a file to cloudinary. It takes in a resource object which contains the file we want to upload, an optional publicId, a transformation object, whether or not to place the file inside a folder, and a folder name. Read more about the upload method in the cloudinary upload docs. handleCloudinaryDelete passes an array of public IDs to the api.delete_resources method for deletion. Read all about this method in the cloudinary admin api docs.
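
To tie these together, here's a rough sketch of how the three helpers could be exercised from server-side code. The file path, public id, and import path are placeholders, and no transformation is passed for brevity.

// Sketch: using the helpers from lib/cloudinary.js (placeholder values)
import {
  handleCloudinaryUpload,
  handleGetCloudinaryUploads,
  handleCloudinaryDelete,
} from "./lib/cloudinary";

const run = async () => {
  // Upload a local file into the default face-landmark-filters/ folder
  const upload = await handleCloudinaryUpload({
    file: "public/images/clown_nose.png", // any local path or remote URL
    publicId: "some-unique-id",
    inFolder: true,
  });

  // List everything uploaded to the folder so far
  const uploads = await handleGetCloudinaryUploads();
  console.log(uploads.resources.length, "resources found");

  // Delete the resource we just uploaded
  await handleCloudinaryDelete([upload.public_id]);
};

run();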


Create a new file under the lib folder and name it face-api.js. Paste the following inside lib/face-api.js.

// lib/face-api.js

import "@tensorflow/tfjs-node";
import { Canvas, Image, ImageData, loadImage } from "canvas";
import { env, nets, detectAllFaces, Point } from "@vladmandic/face-api";

env.monkeyPatch({ Canvas, Image, ImageData });

let modelsLoaded = false;

const loadModels = async () => {
  if (modelsLoaded) {
    return;
  }

  await nets.ssdMobilenetv1.loadFromDisk("public/models");
  await nets.faceLandmark68Net.loadFromDisk("public/models");
  modelsLoaded = true;
};

/**
 * Detect all faces in an image and their landmarks
 * @param {string} imagePath
 */
export const detectFaceLandmarks = async (imagePath) => {
  await loadModels();

  const image = await loadImage(imagePath);

  const faces = await detectAllFaces(image).withFaceLandmarks();

  return faces;
};

/**
 * Gets the approximate center of the landmark
 * @param {Point[]} landmark
 */
export const getCenterOfLandmark = (landmark) => {
  const coordinates = landmark.map((xy) => [xy.x, xy.y]);

  const x = coordinates.map((xy) => xy[0]);
  const y = coordinates.map((xy) => xy[1]);

  const centerX = (Math.min(...x) + Math.max(...x)) / 2;
  const centerY = (Math.min(...y) + Math.max(...y)) / 2;

  return { x: centerX, y: centerY };
};

/**
 * Get the approximate height and width of the landmark.
 * @param {Point[]} landmark
 * @returns
 */
export const getHeightWidthOfLandmark = (landmark) => {
  const minX = Math.min(...landmark.map((xy) => xy.x));
  const maxX = Math.max(...landmark.map((xy) => xy.x));

  const minY = Math.min(...landmark.map((xy) => xy.y));
  const maxY = Math.max(...landmark.map((xy) => xy.y));

  return {
    width: maxX - minX,
    height: maxY - minY,
  };
};

This file contains all the code we need to detect faces and their landmarks. At the top, we first patch the Node environment so that the face-api library can use the HTMLImageElement and the HTMLCanvasElement. We then have a loadModels function which loads our pre-trained models. To avoid loading the models on every API call, we keep a modelsLoaded variable that we check to see if the models are already in memory. In a normal Node project you could just load your models once when the application starts, but since we're using Next.js and serverless functions for the backend, we want to check every time. Read more about loading models here.

detectFaceLandmarks takes in an image path, loads the ML models, creates an Image object using the loadImage function from the canvas package, then detects all faces with their landmarks and returns them. Read more about detecting faces and landmarks here.
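
Here's a minimal, hypothetical usage of detectFaceLandmarks outside of an API route, just to show the flow; the image path is a placeholder.

// Sketch: exercising lib/face-api.js with a placeholder image path
import { detectFaceLandmarks } from "./lib/face-api";

const inspect = async () => {
  // "public/images/sample-face.jpg" is a placeholder; point it at any local photo
  const faces = await detectFaceLandmarks("public/images/sample-face.jpg");

  console.log(`Detected ${faces.length} face(s)`);

  for (const face of faces) {
    // Each detection exposes its landmark points grouped by facial feature
    console.log("nose points:", face.landmarks.getNose().length);
    console.log("mouth points:", face.landmarks.getMouth().length);
  }
};

inspect();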

getCenterOfLandmark takes in an array of x and y coordinates then uses some simple mathematics to estimate the center of the points. Let's use an eye as an example.

  2   3
1   7   4
  5   6

Let's imagine that the numbers 1,2,3,4,5,6 above represent the outline of an eye. We want to get the center, which is represented by the number 7.

getHeightWidthOfLandmark gets the approximate height and width of a landmark. It also takes in an array of x and y coordinates. Using the same example of an eye as before:

  2   3
1   7   4
  5   6

To get the approximate width, we take the smallest x coordinate, which is represented by the number 1, and the largest, which is represented by the number 4, then get the difference. We do the same with the y coordinates for the height.
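
To make that concrete, here's a tiny worked example with made-up coordinates for the six points in the sketch above, assuming the two helpers from lib/face-api.js are imported.

// Made-up eye outline matching the sketch:  2 3  /  1 7 4  /  5 6
const eye = [
  { x: 10, y: 20 }, // 1
  { x: 14, y: 16 }, // 2
  { x: 22, y: 16 }, // 3
  { x: 26, y: 20 }, // 4
  { x: 14, y: 24 }, // 5
  { x: 22, y: 24 }, // 6
];

// Center (point 7): x = (10 + 26) / 2 = 18, y = (16 + 24) / 2 = 20
console.log(getCenterOfLandmark(eye)); // { x: 18, y: 20 }

// Width = 26 - 10 = 16, height = 24 - 16 = 8
console.log(getHeightWidthOfLandmark(eye)); // { width: 16, height: 8 }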


Let's move on to our API routes. Create a folder called filters inside pages/api. Create a new file called index.js under pages/api/filters. This file will handle calls to the /api/filters endpoint. If you are not familiar with API routes in Next.js, I highly recommend you read the docs before proceeding. Paste the following code inside pages/api/filters/index.js.

// pages/api/filters/index.js

import { NextApiHandler, NextApiRequest, NextApiResponse } from "next";
import {
  handleCloudinaryUpload,
  handleGetCloudinaryUploads,
} from "../../../lib/cloudinary";
import {
  CLOUDINARY_FOLDER_NAME,
  FILTERS,
  FILTERS_FOLDER_NAME,
} from "../../../lib/constants";

/**
 * @type {NextApiHandler}
 * @param {NextApiRequest} req
 * @param {NextApiResponse} res
 */
export default async function handler(req, res) {
  const { method } = req;

  switch (method) {
    case "GET": {
      try {
        const result = await handleGetRequest();

        return res.status(200).json({ message: "Success", result });
      } catch (error) {
        return res.status(400).json({ message: "Error", error });
      }
    }

    default: {
      return res.status(405).json({ message: "Method not allowed" });
    }
  }
}

const handleGetRequest = async () => {
  const filters = [];

  const existingFilters = await handleGetCloudinaryUploads(
    `${CLOUDINARY_FOLDER_NAME}${FILTERS_FOLDER_NAME}`
  );

  filters.push(...existingFilters.resources);

  const nonExistingFilters = FILTERS.filter((filter) => {
    const existingFilter = existingFilters.resources.find((resource) => {
      return (
        resource.public_id ===
        `${CLOUDINARY_FOLDER_NAME}${FILTERS_FOLDER_NAME}${filter.publicId}`
      );
    });

    return existingFilter === undefined;
  });

  for (const filter of nonExistingFilters) {
    const uploadResult = await handleCloudinaryUpload({
      file: filter.path,
      publicId: filter.publicId,
      inFolder: true,
      folder: `${CLOUDINARY_FOLDER_NAME}${FILTERS_FOLDER_NAME}`,
    });

    filters.push(uploadResult);
  }

  return filters;
};

I'm assuming that you're already familiar with the structure of a Next.js API route. It's usually a default export function that takes in the incoming request object and the outgoing response object. In our handler function, we use a switch statement to differentiate among the HTTP request methods. On this endpoint, api/filters, we only want to handle GET requests.

The handleGetRequest function gets all filters that have been uploaded to Cloudinary by calling handleGetCloudinaryUploads and passing in a folder; in this case, the folder resolves to face-landmark-filters/filters/. We then compare these with the filters defined in the FILTERS array inside lib/constants.js. If a filter exists in the FILTERS array but not on Cloudinary, we push it into an array of missing filters and upload each of those to Cloudinary. Finally, we return all filters that have been uploaded to the face-landmark-filters/filters/ folder on Cloudinary.
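
Once the dev server is running, you can sanity-check the endpoint with a quick client-side call like the sketch below; each item in result is a Cloudinary resource object, so fields such as public_id and secure_url are available.

// Sketch: calling the filters endpoint from the browser console or a component
const response = await fetch("/api/filters", { method: "GET" });
const { result } = await response.json();

// e.g. ["face-landmark-filters/filters/clown_nose", ...]
console.log(result.map((resource) => resource.public_id));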


Create a folder called images inside pages/api. Create a new file called index.js under pages/api/images. This file will handle calls to the /api/images endpoint. Paste the following code inside pages/api/images/index.js.

// pages/api/images/index.js

import { NextApiHandler, NextApiRequest, NextApiResponse } from "next";
import {
  handleCloudinaryUpload,
  handleGetCloudinaryUploads,
} from "../../../lib/cloudinary";
import {
  CLOUDINARY_FOLDER_NAME,
  FILTERS,
  FILTERS_FOLDER_NAME,
} from "../../../lib/constants";
import {
  detectFaceLandmarks,
  getCenterOfLandmark,
  getHeightWidthOfLandmark,
} from "../../../lib/face-api";
import { parseForm } from "../../../lib/parse-form";

export const config = {
  api: {
    bodyParser: false,
  },
};

/**
 * @type {NextApiHandler}
 * @param {NextApiRequest} req
 * @param {NextApiResponse} res
 */
export default async function handler(req, res) {
  const { method } = req;

  switch (method) {
    case "GET": {
      try {
        const result = await handleGetRequest();

        return res.status(200).json({ message: "Success", result });
      } catch (error) {
        console.error(error);
        return res.status(400).json({ message: "Error", error });
      }
    }

    case "POST": {
      try {
        const result = await handlePostRequest(req);

        return res.status(201).json({ message: "Success", result });
      } catch (error) {
        console.error(error);
        return res.status(400).json({ message: "Error", error });
      }
    }

    default: {
      return res.status(405).json({ message: "Method not allowed" });
    }
  }
}

const handleGetRequest = async () => {
  const result = await handleGetCloudinaryUploads();

  result.resources = result.resources.filter(
    (resource) =>
      !resource.public_id.startsWith(
        `${CLOUDINARY_FOLDER_NAME}${FILTERS_FOLDER_NAME}`
      )
  );

  return result;
};

/**
 *
 * @param {NextApiRequest} req
 */
const handlePostRequest = async (req) => {
  // Get the form data using the parseForm function
  const data = await parseForm(req);

  const photo = data.files.photo;
  const {
    nose: noseFilter,
    mouth: mouthFilter,
    left_eye: leftEyeFilter,
    right_eye: rightEyeFilter,
  } = data.fields;

  const faces = await detectFaceLandmarks(photo.filepath);

  const transformations = [];

  for (const face of faces) {
    const { landmarks } = face;

    if (noseFilter) {
      const nose = landmarks.getNose();

      const centerOfNose = getCenterOfLandmark(nose);
      const heightWidthOfNose = getHeightWidthOfLandmark(nose);

      const filter = FILTERS.find((filter) => filter.publicId === noseFilter);

      if (!filter) {
        throw new Error("Filter not found");
      }

      transformations.push({
        overlay:
          `${CLOUDINARY_FOLDER_NAME}${FILTERS_FOLDER_NAME}${filter.publicId}`.replace(
            /\//g,
            ":"
          ),
        width:
          Math.round(heightWidthOfNose.width) +
          (filter.presets?.widthOffset ?? 0),
        height:
          Math.round(heightWidthOfNose.height) +
          (filter.presets?.heightOffset ?? 0),
        crop: "fit",
        gravity: "xy_center",
        x: Math.round(centerOfNose.x),
        y: Math.round(centerOfNose.y),
      });
    }

    if (mouthFilter) {
      const mouth = landmarks.getMouth();

      const centerOfMouth = getCenterOfLandmark(mouth);
      const heightWidthOfMouth = getHeightWidthOfLandmark(mouth);

      const filter = FILTERS.find((filter) => filter.publicId === mouthFilter);

      if (!filter) {
        throw new Error("Filter not found");
      }

      transformations.push({
        overlay:
          `${CLOUDINARY_FOLDER_NAME}${FILTERS_FOLDER_NAME}${filter.publicId}`.replace(
            /\//g,
            ":"
          ),
        width:
          Math.round(heightWidthOfMouth.width) +
          (filter.presets?.widthOffset ?? 0),
        height:
          Math.round(heightWidthOfMouth.height) +
          (filter.presets?.heightOffset ?? 0),
        crop: "fit",
        gravity: "xy_center",
        x: Math.round(centerOfMouth.x),
        y: Math.round(centerOfMouth.y + heightWidthOfMouth.height),
      });
    }

    if (leftEyeFilter) {
      const leftEye = landmarks.getLeftEye();

      const centerOfLeftEye = getCenterOfLandmark(leftEye);
      const heightWidthOfLeftEye = getHeightWidthOfLandmark(leftEye);

      const filter = FILTERS.find(
        (filter) => filter.publicId === leftEyeFilter
      );

      if (!filter) {
        throw new Error("Filter not found");
      }

      transformations.push({
        overlay:
          `${CLOUDINARY_FOLDER_NAME}${FILTERS_FOLDER_NAME}${filter.publicId}`.replace(
            /\//g,
            ":"
          ),
        width:
          Math.round(heightWidthOfLeftEye.width) +
          (filter.presets?.widthOffset ?? 0),
        height:
          Math.round(heightWidthOfLeftEye.height) +
          (filter.presets?.heightOffset ?? 0),
        crop: "fit",
        gravity: "xy_center",
        x: Math.round(centerOfLeftEye.x),
        y: Math.round(centerOfLeftEye.y),
      });
    }

    if (rightEyeFilter) {
      const rightEye = landmarks.getRightEye();

      const centerOfRightEye = getCenterOfLandmark(rightEye);
      const heightWidthOfRightEye = getHeightWidthOfLandmark(rightEye);

      const filter = FILTERS.find(
        (filter) => filter.publicId === rightEyeFilter
      );

      if (!filter) {
        throw new Error("Filter not found");
      }

      transformations.push({
        overlay:
          `${CLOUDINARY_FOLDER_NAME}${FILTERS_FOLDER_NAME}${filter.publicId}`.replace(
            /\//g,
            ":"
          ),
        width:
          Math.round(heightWidthOfRightEye.width) +
          (filter.presets?.widthOffset ?? 0),
        height:
          Math.round(heightWidthOfRightEye.height) +
          (filter.presets?.heightOffset ?? 0),
        crop: "fit",
        gravity: "xy_center",
        x: Math.round(centerOfRightEye.x),
        y: Math.round(centerOfRightEye.y),
      });
    }
  }

  const uploadResult = await handleCloudinaryUpload({
    file: photo.filepath,
    transformation: transformations,
    inFolder: true,
  });

  return uploadResult;
};

This endpoint is just slightly different from the api/filters endpoint. In this one, we export a config object at the top in addition to the default export function that handles the requests. The config object instructs Next.js not to use the default built-in body-parser. This is because we're expecting form data and we want to parse this ourselves using formidable. Read more about custom config for api routes in the documentation.

This time around we want to handle GET and POST requests. handleGetRequest gets all images uploaded to the face-landmark-filters/ folder. We also want to filter out any images inside the face-landmark-filters/filters/ folder because those are just our filter images.

handlePostRequest takes in the incoming request object and passes it to the parseForm function that we created earlier. This parses the incoming form data. From the data, we get the photo that has been uploaded

// ...
const photo = data.files.photo;
// ...

as well as which filters to use for the nose, mouth, and eyes.

// ...
const {
  nose: noseFilter,
  mouth: mouthFilter,
  left_eye: leftEyeFilter,
  right_eye: rightEyeFilter,
} = data.fields;
// ...

We then call the detectFaceLandmarks function and pass the uploaded photo to get all faces and landmarks.

// ...
const faces = await detectFaceLandmarks(photo.filepath);
// ...

For every detected face, we get the landmarks using javascript object destructuring,

// ...
for (const face of faces) {
  const { landmarks } = face;
// ...

then we check the parsed form data to see if the user selected a filter to apply to the nose, mouth, or eyes. If there's a filter for one of those landmarks, we get the landmark coordinates, the center of the landmark, its height and width, and also check that the filter exists in our FILTERS array. Using the nose as an example:

// ...
if (noseFilter) {
  const nose = landmarks.getNose();

  const centerOfNose = getCenterOfLandmark(nose);
  const heightWidthOfNose = getHeightWidthOfLandmark(nose);

  const filter = FILTERS.find((filter) => filter.publicId === noseFilter);
// ...

For every filter that we need to apply, we push a transformation object to the transformations array. Read about transformations in-depth in the cloudinary image transformations docs. To apply an overlay transformation, we need to pass the following transformation object

// This is just an example using sample values
{
  overlay: 'resource public id',
  width: 100,
  height: 100,
  crop: 'crop style', // which crop style to use if you need to crop
  gravity: 'gravity', // where to position the overlay relative to
  x: 100, // x coordinates relative to the gravity
  y: 100, // y coordinates relative to the gravity
}

In our case, for the overlay, we use the filter's folder plus its publicId value, with slashes replaced by colons. This is why I mentioned earlier to make sure that publicId is unique when adding filters to the FILTERS array. For the width and height, we use the landmark's approximate height and width plus their offset presets. For the crop value, we use fit. Read about all possible values here. For gravity, we use xy_center, which is a special position that places the overlay's center at our x and y values. Read about this here. For our x and y, we just use the center of the landmark.
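
For instance, assuming the clown_nose filter and a detected nose roughly 80x60 pixels centered at (250, 300) — made-up numbers — the object we push would look something like this:

// Slashes in the public id become colons when used as an overlay id
const overlayId =
  "face-landmark-filters/filters/clown_nose".replace(/\//g, ":");
// -> "face-landmark-filters:filters:clown_nose"

const transformation = {
  overlay: overlayId,
  width: 80 + 30, // detected nose width + clown_nose widthOffset preset
  height: 60 + 30, // detected nose height + clown_nose heightOffset preset
  crop: "fit",
  gravity: "xy_center", // center the overlay on the x/y coordinates below
  x: 250, // center of the nose on the x axis
  y: 300, // center of the nose on the y axis
};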

For a bit more context, check out this documentation on placing layers on images.

Once we have our transformations ready, we upload the photo to cloudinary using the handleCloudinaryUpload function and pass the transformations to the transformation field.


Next, create a file called [...id].js under the pages/api/images folder. This file will handle API requests made to the /api/images/:id endpoint. Paste the following code inside.

// pages/api/images/[...id].js

import { NextApiRequest, NextApiResponse, NextApiHandler } from "next";
import { handleCloudinaryDelete } from "../../../lib/cloudinary";

/**
 * @type {NextApiHandler}
 * @param {NextApiRequest} req
 * @param {NextApiResponse} res
 */
export default async function handler(req, res) {
  let { id } = req.query;

  if (!id) {
    res.status(400).json({ error: "Missing id" });
    return;
  }

  if (Array.isArray(id)) {
    id = id.join("/");
  }

  switch (req.method) {
    case "DELETE": {
      try {
        const result = await handleDeleteRequest(id);

        return res.status(200).json({ message: "Success", result });
      } catch (error) {
        console.error(error);
        return res.status(400).json({ message: "Error", error });
      }
    }

    default: {
      return res.status(405).json({ message: "Method not allowed" });
    }
  }
}

const handleDeleteRequest = async (id) => {
  const result = await handleCloudinaryDelete([id]);

  return result;
};

This endpoint only accepts DELETE requests. handleDeleteRequest passes an image's public id to handleCloudinaryDelete, which deletes the image from Cloudinary. The catch-all ([...id].js) file name syntax is used to match every route segment that comes after /api/images/, for example routes such as /api/images/:id/:anotherId/ or /api/images/:id/someAction/ instead of just /api/images/:id/. Read this documentation for a much better explanation.
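
For instance, since our uploads live inside the face-landmark-filters/ folder, their public ids contain a slash, so a delete request ends up hitting a nested path. A rough sketch of what the route sees (the abc123 part is made up):

// DELETE /api/images/face-landmark-filters/abc123
// Next.js fills req.query.id with every path segment after /api/images/
let id = ["face-landmark-filters", "abc123"];

if (Array.isArray(id)) {
  // Re-join the segments to recover the full Cloudinary public id
  id = id.join("/"); // -> "face-landmark-filters/abc123"
}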


We can finally move on to the front end. This is just some basic React.js and I won't be focusing too much on explaining what each bit does.

Add the following code inside styles/globals.css

a:hover {
  text-decoration: underline;
}

:root {
  --color-primary: #ffee00;
}

.button {
  background-color: var(--color-primary);
  border-radius: 5px;
  border: none;
  color: #000000;
  text-transform: uppercase;
  padding: 1rem;
  font-size: 1rem;
  font-weight: 700;
  cursor: pointer;
  transition: all 0.2s;
  min-width: 50px;
}

.danger {
  color: #ffffff;
  background-color: #cc0000;
}

.button:hover:not([disabled]) {
  filter: brightness(96%);
  box-shadow: 0px 2px 4px rgba(0, 0, 0, 0.2);
}

.button:disabled {
  opacity: 0.5;
  cursor: not-allowed;
}

These are some global styles that we're going to be using in our components.

Create a folder called components at the root of your project and then create a file called Layout.js inside it. Paste the following code inside components/Layout.js

import Head from "next/head";
import Link from "next/link";

export default function Layout({ children }) {
  return (
    <div>
      <Head>
        <title>Face Landmarks Filters</title>
        <meta name="description" content="Face Landmarks Filters" />
        <link rel="icon" href="/favicon.ico" />
      </Head>

      <nav>
        <Link href="/">
          <a>Home</a>
        </Link>
        <Link href="/images">
          <a>Images</a>
        </Link>
      </nav>

      <main>{children}</main>
      <style jsx>{`
        nav {
          height: 100px;
          background-color: var(--color-primary);
          display: flex;
          flex-flow: row wrap;
          justify-content: center;
          align-items: center;
          gap: 10px;
        }

        nav a {
          font-weight: bold;
          letter-spacing: 1px;
        }

        main {
          min-height: calc(100vh - 100px);
          background-color: #f4f4f4;
        }
      `}</style>
    </div>
  );
}

We're going to use this to wrap all of our components. This achieves some structural consistency and also avoids code duplication.

Paste the following code inside pages/index.js.

import { useCallback, useEffect, useState } from "react";
import Layout from "../components/Layout";
import Image from "next/image";
import { useRouter } from "next/router";
import {
  CLOUDINARY_FOLDER_NAME,
  FILTERS,
  FILTERS_FOLDER_NAME,
} from "../lib/constants";

export default function Home() {
  const router = useRouter();

  const [filters, setFilters] = useState(null);

  /**
   * @type {[File, (file:File)=>void]}
   */
  const [image, setImage] = useState(null);

  /**
   * @type {[boolean, (uploading:boolean)=>void]}
   */
  const [loading, setLoading] = useState(false);

  /**
   * @type {[boolean, (uploading:boolean)=>void]}
   */
  const [uploadInProgress, setUploadInProgress] = useState(false);

  const getFilters = useCallback(async () => {
    try {
      setLoading(true);
      const response = await fetch("/api/filters", {
        method: "GET",
      });

      const data = await response.json();

      if (!response.ok) {
        throw data;
      }

      setFilters(
        FILTERS.map((filter) => {
          const resource = data.result.find((result) => {
            return (
              result.public_id ===
              `${CLOUDINARY_FOLDER_NAME}${FILTERS_FOLDER_NAME}${filter.publicId}`
            );
          });

          return {
            ...filter,
            resource,
          };
        }).filter((filter) => filter.resource)
      );
    } catch (error) {
      console.error(error);
    } finally {
      setLoading(false);
    }
  }, []);

  useEffect(() => {
    getFilters();
  }, [getFilters]);

  const handleFormSubmit = async (event) => {
    event.preventDefault();

    try {
      setUploadInProgress(true);

      const formData = new FormData(event.target);

      const response = await fetch("/api/images", {
        method: "POST",
        body: formData,
      });

      const data = await response.json();

      if (!response.ok) {
        throw data;
      }

      router.push("/images");
    } catch (error) {
      console.error(error);
    } finally {
      setUploadInProgress(false);
    }
  };

  return (
    <Layout>
      <div className="wrapper">
        <form onSubmit={handleFormSubmit}>
          {loading ? (
            <small>getting filters...</small>
          ) : (
            <small>Ready. {filters?.length} filters available</small>
          )}

          {filters && (
            <div className="filters">
              {filters.map((filter) => (
                <div key={filter.resource.public_id} className="filter">
                  <label htmlFor={filter.publicId}>
                    <Image
                      src={filter.resource.secure_url}
                      alt={filter.resource.secure_url}
                      layout="fill"
                    ></Image>
                  </label>
                  <input
                    type="radio"
                    name={filter.landmark}
                    id={filter.publicId}
                    value={filter.publicId}
                    disabled={uploadInProgress}
                  ></input>
                </div>
              ))}
            </div>
          )}

          {image && (
            <div className="preview">
              <Image
                src={URL.createObjectURL(image)}
                alt="Image preview"
                layout="fill"
              ></Image>
            </div>
          )}
          <div className="form-group file">
            <label htmlFor="photo">Click to select photo</label>
            <input
              type="file"
              id="photo"
              name="photo"
              multiple={false}
              hidden
              accept=".png,.jpg,.jpeg"
              disabled={uploadInProgress}
              onInput={(event) => {
                setImage(event.target.files[0]);
              }}
            />
          </div>

          <button
            className="button"
            type="submit"
            disabled={!image || uploadInProgress || !filters}
          >
            Upload
          </button>
        </form>
      </div>
      <style jsx>{`
        div.wrapper {
          height: 100vh;
          display: flex;
          flex-direction: column;
          justify-content: center;
          align-items: center;
        }

        div.wrapper form {
          width: 60%;
          max-width: 600px;
          min-width: 300px;
          padding: 20px;
          border-radius: 5px;
          display: flex;
          flex-direction: column;
          justify-content: start;
          align-items: center;
          gap: 20px;
          background-color: #ffffff;
        }

        div.wrapper form div.preview {
          position: relative;
          height: 200px;
          width: 100%;
          object-fit: cover;
        }

        div.wrapper form div.filters {
          width: 100%;
          height: 200px;
          display: flex;
          flex-flow: row wrap;
          justify-content: center;
          align-items: center;
          gap: 5px;
        }

        div.wrapper form div.filters div.filter {
          flex: 0 0 50px;
          display: flex;
          flex-flow: row-reverse nowrap;
          padding: 10px;
          border: 1px solid #cccccc;
          border-radius: 5px;
        }

        div.wrapper form div.filters div.filter label {
          position: relative;
          width: 100px;
          height: 100px;
        }

        div.wrapper form div.form-group {
          width: 100%;
          display: flex;
          flex-direction: column;
          justify-content: center;
          align-items: flex-start;
        }

        div.wrapper form div.form-group.file {
          background-color: #f1f1f1;
          height: 150px;
          border-radius: 5px;
          cursor: pointer;
          display: flex;
          justify-content: center;
          align-items: center;
        }

        div.wrapper form div.form-group label {
          font-weight: bold;
          height: 100%;
          width: 100%;
          cursor: pointer;
          display: flex;
          justify-content: center;
          align-items: center;
        }

        div.wrapper form div.form-group.file input {
          height: 100%;
          width: 100%;
          cursor: pointer;
        }

        div.wrapper form button {
          width: 100%;
        }
      `}</style>
    </Layout>
  );
}

Notice the use of a number of React hooks. Read about the useState hook here, and the useCallback and useEffect hooks here. The docs cover their uses pretty well and are easy to follow. We use the useEffect hook to call the memoized getFilters function, which makes a GET request to the api/filters endpoint to get all available filters.

In the body of our component, we have a form where the user can select which filters to apply and pick a photo for upload. We use a radio button group to ensure the user doesn't select more than one filter for the same facial landmark. When the form is submitted, the handleFormSubmit function is triggered. This function makes a POST request to the api/images endpoint with the form data as the body. On success, we navigate to the /images page that we'll be creating next. Read about useRouter here.

Create a new file under pages/ called images.js. Paste the following inside pages/images.js.

import { useCallback, useEffect, useState } from "react";
import Layout from "../components/Layout";
import Link from "next/link";
import Image from "next/image";

export default function Images() {
  const [images, setImages] = useState([]);

  const [loading, setLoading] = useState(false);

  const getImages = useCallback(async () => {
    try {
      setLoading(true);

      const response = await fetch("/api/images", {
        method: "GET",
      });

      const data = await response.json();

      if (!response.ok) {
        throw data;
      }

      setImages(data.result.resources);
    } catch (error) {
      console.error(error);
    } finally {
      setLoading(false);
    }
  }, []);

  useEffect(() => {
    getImages();
  }, [getImages]);

  const handleDownloadResource = async (url) => {
    try {
      setLoading(true);

      const response = await fetch(url, {});

      if (response.ok) {
        const blob = await response.blob();

        const fileUrl = URL.createObjectURL(blob);

        const a = document.createElement("a");
        a.href = fileUrl;
        a.download = `face-landmark-filters.${url.split(".").at(-1)}`;
        document.body.appendChild(a);
        a.click();
        a.remove();
        return;
      }

      throw await response.json();
    } catch (error) {
      // TODO: Show error message to the user
      console.error(error);
    } finally {
      setLoading(false);
    }
  };

  const handleDelete = async (id) => {
    try {
      setLoading(true);

      const response = await fetch(`/api/images/${id}`, {
        method: "DELETE",
      });

      const data = await response.json();

      if (!response.ok) {
        throw data;
      }

      getImages();
    } catch (error) {
    } finally {
      setLoading(false);
    }
  };

  return (
    <Layout>
      {images.length > 0 ? (
        <div className="wrapper">
          <div className="images-wrapper">
            {images.map((image) => {
              return (
                <div className="image-wrapper" key={image.public_id}>
                  <div className="image">
                    <Image
                      src={image.secure_url}
                      width={image.width}
                      height={image.height}
                      layout="responsive"
                      alt={image.secure_url}
                    ></Image>
                  </div>
                  <div className="actions">
                    <button
                      className="button"
                      disabled={loading}
                      onClick={() => {
                        handleDownloadResource(image.secure_url);
                      }}
                    >
                      Download
                    </button>
                    <button
                      className="button danger"
                      disabled={loading}
                      onClick={() => {
                        handleDelete(image.public_id);
                      }}
                    >
                      Delete
                    </button>
                  </div>
                </div>
              );
            })}
          </div>
        </div>
      ) : null}
      {!loading && images.length === 0 ? (
        <div className="no-images">
          <b>No Images Yet</b>
          <Link href="/">
            <a className="button">Upload some images</a>
          </Link>
        </div>
      ) : null}
      {loading && images.length === 0 ? (
        <div className="loading">
          <b>Loading...</b>
        </div>
      ) : null}
      <style jsx>{`
        div.wrapper {
          min-height: 100vh;
          background-color: #f4f4f4;
        }

        div.wrapper div.images-wrapper {
          display: flex;
          flex-flow: row wrap;
          gap: 10px;
          padding: 10px;
        }

        div.wrapper div.images-wrapper div.image-wrapper {
          flex: 0 0 400px;
          display: flex;
          flex-flow: column;
        }

        div.wrapper div.images-wrapper div.image-wrapper div.image {
          background-color: #ffffff;
          position: relative;
          width: 100%;
        }

        div.wrapper div.images-wrapper div.image-wrapper div.actions {
          background-color: #ffffff;
          padding: 10px;
          display: flex;
          flex-flow: row wrap;
          gap: 10px;
        }

        div.loading,
        div.no-images {
          height: 100vh;
          display: flex;
          align-items: center;
          justify-content: center;
          flex-flow: column;
          gap: 10px;
        }
      `}</style>
    </Layout>
  );
}

This is a simple page. We call the getImages function when the component is mounted. getImages then makes a GET request to the /api/images endpoint to get all uploaded images (these will be the images that already have a filter applied to them). For the body, we just show the images in a flexbox container. Each image has a download and a delete button.

That's about it. I may have rushed over the UI part; however, the React.js and Next.js docs explain most of those things extremely well, and you can always look up anything you have issues with there.

The last thing we need to do is configure our Next.js project to be able to display images from Cloudinary. Next.js does a lot of things under the hood to optimize the performance of your applications. One of these is optimizing images when using the Image component from Next.js. We need to add Cloudinary's domain to our config file. Read more about this here. Add the following to next.config.js. If you don't find the file at the root of your project, you can create it yourself.

module.exports = {
  // ...
  images: {
    domains: ["res.cloudinary.com"],
  },
};

Our application is now ready to run.

npm run dev

You can find the full source code on my GitHub. Remember, this is a simple implementation for demonstration purposes. You can always optimize a few things for use in the real world.

Eugene Musebe

Software Developer

I'm a full-stack software developer, content creator, and tech community builder based in Nairobi, Kenya. I am addicted to learning new technologies and love working with like-minded people.