Cloudinary provides an Advanced Facial Attribute Detection add-on, an integrated face-detection solution that uses Microsoft Cognitive Services to automatically extract meaningful, advanced data about the face(s) in an image, including face-related attributes and the exact location of notable facial features. This add-on is fully integrated into Cloudinary's image management and transformation pipeline, extending Cloudinary's features for semantic photo data extraction, image cropping, and the positioning of image overlays.
In this post, we'll create a simple app that illustrates how to extract advanced facial attributes from an image, crop, and add an image overlay based on the extracted data.
Here is a link to the demo CodeSandbox.
Setting up the Project
Create a new Next.js application using the following command:
```shell
npx create-next-app facial-attributes-detection
```
Run these commands to navigate into the project directory and install the required dependencies:
```shell
cd facial-attributes-detection
npm install cloudinary axios
```
The Cloudinary Node SDK will provide easy-to-use methods to interact with the Cloudinary APIs, while axios will serve as our HTTP client.
Now we can start our application on http://localhost:3000/ using the following command:
```shell
npm run dev
```
Cloudinary Setup
First, sign up for a free Cloudinary account if you don’t have one already. Displayed on your account’s Management Console (aka Dashboard) are important details: your cloud name, API key, etc.
Next, let’s create environment variables to hold the details of our Cloudinary account. Create a new file called `.env` at the root of your project and add the following to it:
```
CLOUD_NAME = YOUR CLOUD NAME HERE
API_KEY = YOUR API KEY HERE
API_SECRET = YOUR API SECRET HERE
```
This will be used as a default when the project is set up on another system. To update your local environment, create a copy of the `.env` file using the following command:
```shell
cp .env .env.local
```
By default, Next.js lists `.env.local` in its `.gitignore` file, mitigating the security risk of inadvertently exposing secret credentials to the public. You can now update the `.env.local` file with your Cloudinary credentials.
Cloudinary requires you to subscribe to an add-on before you can use it. To register for the Advanced Facial Attribute Detection add-on, follow the steps below:
- Click on the Add-ons link in your Cloudinary console.
- You should see a page consisting of all the available Cloudinary add-ons. Scroll down to locate the Advanced Facial Attributes Detection add-on, click on it and select your preferred plan. We'll be using the free plan for this project, which gives us 50 free detections monthly.
Extract Facial Attributes on Upload
A detailed object comprising the facial attributes of the faces detected in an image can be extracted by setting the `detection` parameter to `adv_face` when uploading an image to Cloudinary using the upload API. The data detected and extracted by the add-on is stored in a `data` key nested in the `info` node of the JSON response.

The value stored in the `data` key is an array of objects, with each object holding full details about an individual detected face. The details are divided into `attributes`, `bounding_box`, and `facial_landmarks`.
- `attributes`: key-value pairs of general information, such as the person's expression, hair details, gender, make-up, and so on.
- `bounding_box`: details about the bounding box surrounding a detected face: its height, width, etc.
- `facial_landmarks`: the exact position details of specific elements of the mouth, eyebrows, eyes, and nose.
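For orientation, here is a heavily trimmed sketch of that response shape, using hypothetical values; the exact field names inside `attributes` and `facial_landmarks` come from Microsoft Cognitive Services, so log the real response to see every field:

```javascript
// Hypothetical, heavily trimmed sketch of the upload response; real responses
// contain many more attributes, landmarks, and metadata per face.
const response = {
  public_id: "sample",
  info: {
    detection: {
      adv_face: {
        data: [
          {
            attributes: { gender: "female", smile: 0.9, glasses: "NoGlasses" },
            bounding_box: { top: 234.5, left: 216.8, width: 189.6, height: 189.6 },
            facial_landmarks: {
              mouth: { mouth_left: { x: 270.9, y: 400.3 } },
            },
          },
        ],
      },
    },
  },
};

// Each entry in the data array describes one detected face.
const faces = response.info.detection.adv_face.data;
console.log(`Detected ${faces.length} face(s)`);
```

Treat the structure above as illustrative only; the authoritative shape is whatever your own upload logs to the console.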
Let’s upload an image to Cloudinary and set the detection parameter to `adv_face` to see the complete response returned. Create a file named `upload.js` in the `pages/api` directory and add the following to it:
```javascript
const cloudinary = require("cloudinary").v2;

cloudinary.config({
  cloud_name: process.env.CLOUD_NAME,
  api_key: process.env.API_KEY,
  api_secret: process.env.API_SECRET,
  secure: true,
});

export default async function handler(req, res) {
  try {
    const response = await cloudinary.uploader.upload(req.body.image, {
      detection: "adv_face",
    });
    res.status(200).json(response);
  } catch (error) {
    res.status(500).json(error);
  }
}

export const config = {
  api: {
    bodyParser: {
      sizeLimit: "4mb",
    },
  },
};
```
In the code above, we defined an `upload` API route to handle file uploads to Cloudinary. We import Cloudinary and configure it with an object containing our Cloudinary credentials. Next, we define a route handler that calls the Cloudinary `upload` method, passing the expected base64 image and an object setting the `detection` parameter as arguments. On success, the response is sent back to the client; otherwise, an error is returned. At the bottom of the file, we export the Next.js config object to raise the default payload size limit to 4MB.
Now let's create a client side for selecting an image and forwarding it to the `/upload` route. Clear the existing content in your `pages/index.js` file and replace it with the following:
```jsx
import { useState } from "react";
import axios from "axios";
import styles from "../styles/Home.module.css";

export default function Home() {
  const [image, setImage] = useState("");
  const [uploadStatus, setUploadStatus] = useState();
  const [imageId, setImageId] = useState("");

  const handleImageChange = (e, setStateFunc) => {
    const reader = new FileReader();
    if (!e.target.files[0]) return;
    reader.readAsDataURL(e.target.files[0]);
    reader.onload = function (e) {
      setStateFunc(e.target.result);
    };
  };

  const handleUpload = async () => {
    setUploadStatus("Uploading...");
    try {
      const response = await axios.post("/api/upload", { image });
      setImageId(response.data.public_id);
      setUploadStatus("Upload successful");
      console.log(response.data);
    } catch (error) {
      setUploadStatus("Upload failed..");
    }
  };

  return (
    <main className={styles.main}>
      <h2>Facial attributes detection</h2>
      <div>
        <div className={styles.input}>
          <div>
            <label htmlFor="image">
              {image ? (
                <img src={image} alt="image" />
              ) : (
                "Click to select image"
              )}
            </label>
            <input
              type="file"
              id="image"
              onChange={(e) => handleImageChange(e, setImage)}
            />
          </div>
          <button onClick={handleUpload}>Upload</button>
          <p>{uploadStatus}</p>
        </div>
      </div>
    </main>
  );
}
```
In the code above, we defined the `Home` component, which holds three states: the selected image, the request status, and a Cloudinary-generated ID. Next, we rendered a file input field, using its label as a custom file picker; selecting a file triggers the `handleImageChange` function, which converts the selected image to its base64 equivalent. We also rendered a button that calls the `handleUpload` function on click. `handleUpload` makes an Axios call to our API route, sets the required states accordingly, and logs the complete response to the console.
Now let's add some styles to give our application a decent look. Copy the styles in this CodeSandbox link to your `styles/Home.module.css` file.
Next, preview the application in your browser and upload an image with faces. Then open the developer console to see the complete JSON response object.
Displayed below is a closer look at the response object.
We can also use Cloudinary's Admin API to apply automatic face-attribute detection to already uploaded images based on their public IDs. To achieve this, call the `update` method of the Admin API and set the `detection` parameter to `adv_face`, as shown below.
```javascript
const response = await cloudinary.v2.api.update("public-id", {
  detection: "adv_face",
});
```
Crop Images Based on Detected Faces
As mentioned earlier, the Advanced Facial Attribute Detection add-on is fully integrated into Cloudinary's image management and transformation pipeline. Therefore, we can crop and apply other transformations to the image based on the position of facial attributes detected by the Advanced Facial Attribute Detection add-on.
To crop a processed image so it focuses on the detected faces, set the `gravity` parameter to `adv_faces` (or to `adv_face` to focus on the single largest detected face) when calling the `image` method of Cloudinary's image transformation API. We also need to specify the `width` and `height` parameters and set the `crop` parameter to either `crop`, `thumb`, or `fill`. Click here to learn more about the various image resizing and cropping options.
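These options ultimately map onto directives in the delivery URL. Here is a minimal sketch of the resulting URL, assuming a hypothetical cloud name (`demo`) and public ID (`sample`); a real signed URL would additionally carry a signature segment, which this sketch omits:

```javascript
// Sketch only: the crop option maps to c_, gravity to g_, height to h_,
// and width to w_. Cloud name and public ID are hypothetical placeholders.
const cloudName = "demo";
const publicId = "sample";
const transformation = "c_thumb,g_adv_faces,h_240,w_240";
const croppedUrl = `https://res.cloudinary.com/${cloudName}/image/upload/${transformation}/${publicId}.jpg`;
console.log(croppedUrl);
```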
To add the cropping functionality to our application, create a `crop.js` file in the `pages/api` folder and add the following to it:
```javascript
const cloudinary = require("cloudinary").v2;

cloudinary.config({
  cloud_name: process.env.CLOUD_NAME,
  api_key: process.env.API_KEY,
  api_secret: process.env.API_SECRET,
  secure: true,
});

export default async function handler(req, res) {
  try {
    const response = await cloudinary.image(`${req.body.imageId}.jpg`, {
      gravity: "adv_faces",
      height: 240,
      width: 240,
      crop: "thumb",
      sign_url: true,
    });
    res.status(200).json(response);
  } catch (error) {
    res.status(500).json(error);
  }
}
```
In the code above, in addition to the `gravity`, `width`, `height`, and `crop` parameters, we set the `sign_url` parameter to `true` to reduce the potential costs of users accessing unplanned dynamic URLs containing the Advanced Facial Attribute Detection cropping directives.
The expected response sent back to the client side will be an `<img>` element with a URL that links to the cropped image.
Let's update the client side of the application to reflect the changes. Update your `pages/index.js` file with the following:
```jsx
export default function Home() {
  //...

  // Add this
  const [cldData, setCldData] = useState("");

  const handleImageChange = (e, setStateFunc) => {
    //...
  };

  const handleUpload = async () => {
    //...
  };

  const handleCrop = async () => {
    setUploadStatus("Cropping...");
    try {
      const response = await axios.post("/api/crop", { imageId });
      const imageUrl = /'(.+)'/.exec(response.data)[1].split("' ")[0];
      setCldData(imageUrl);
      setUploadStatus("done");
    } catch (error) {
      setUploadStatus("failed..");
    }
  };

  return (
    <main className={styles.main}>
      <h2>Facial attributes detection</h2>
      <div>
        <div className={styles.input}>
          <div>
            <label htmlFor="image">
              {image ? (
                <img src={image} alt="image" />
              ) : (
                "Click to select image"
              )}
            </label>
            <input
              type="file"
              id="image"
              onChange={(e) => handleImageChange(e, setImage)}
            />
          </div>
          <button onClick={handleUpload}>Upload</button>
          <p>{uploadStatus}</p>
          {/* Add this */}
          <div className={styles.btns}>
            <button disabled={!imageId} onClick={handleCrop}>
              Crop
            </button>
          </div>
        </div>
        {/* Add this */}
        <div className={styles.output}>
          {cldData ? <img src={cldData} alt=" " /> : "Output image"}
        </div>
      </div>
    </main>
  );
}
```
In the updated code, we defined a state called `cldData` to hold the expected URL of the cropped image. We also rendered a button, and an image that uses the URL saved in the `cldData` state. The button stays disabled until a valid Cloudinary image ID is returned after uploading, and triggers the `handleCrop` function when clicked. The function makes an Axios call to the `/crop` API route to get the `<img>` element response returned by Cloudinary, extracts the URL from the response, and sets the states accordingly.
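The extraction step can be sketched in isolation. Assuming a hypothetical `<img>` string of the kind the route returns, a regex pulls out the first single-quoted value, which is the image URL (here using a non-greedy match, a slight variant of the greedy match plus `split` used in the component):

```javascript
// Hypothetical <img> string of the kind the /crop route sends back.
const imgTag =
  "<img src='https://res.cloudinary.com/demo/image/upload/c_thumb,g_adv_faces,h_240,w_240/sample.jpg' width='240' height='240'/>";

// Capture everything between the first pair of single quotes.
const imageUrl = /'(.+?)'/.exec(imgTag)[1];
console.log(imageUrl);
```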
Save the changes and preview the application in your browser. You should be able to upload and crop an image based on the detected faces.
In addition to cropping processed images based on the detected faces, Cloudinary also supports eye-detection-based cropping: it automatically crops images based on the position of the detected eyes, leveraging the data extracted by the add-on. To implement this, update your `pages/api/crop.js` file with the following:
```javascript
const response = await cloudinary.image(`${req.body.imageId}.jpg`, {
  gravity: "adv_eyes", // add this
  height: 240,
  width: 240,
  crop: "thumb",
  sign_url: true,
});
```
Apply Face Overlay on Detected Faces
Taking into account the pose of each face captured in the extracted facial attributes, Cloudinary can position overlays on top of detected faces and even automatically scale and rotate the overlay according to how the underlying face is positioned.
To properly place an overlay on all detected faces in a processed image, set the `overlay` parameter to the public ID of your preferred overlay image and the `gravity` parameter of the added overlay to `adv_faces`. We also need to set the `region_relative` flag together with `width` and `crop` values. The `width` takes a relative value; for example, 1.1 scales the overlay to 110% of the width of each detected face.
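Expressed as a delivery URL, that overlay configuration corresponds to a chained transformation along these lines (cloud name and public IDs are hypothetical placeholders, and the signature segment of a signed URL is omitted):

```javascript
// Sketch of the URL form of the face-overlay transformation; all IDs below
// are hypothetical placeholders.
const cloudName = "demo";
const baseId = "sample";
const overlayId = "glasses";

// l_ adds the overlay layer; fl_region_relative with w_1.1 scales it to 110%
// of each detected face; fl_layer_apply,g_adv_faces applies it over every face.
const chain = `l_${overlayId}/fl_region_relative,w_1.1,c_scale/fl_layer_apply,g_adv_faces`;
const overlayUrl = `https://res.cloudinary.com/${cloudName}/image/upload/${chain}/${baseId}.jpg`;
console.log(overlayUrl);
```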
Let's update our application to include this functionality. Create a file called `overlay.js` in the `pages/api` folder and add the following to it:
```javascript
const cloudinary = require("cloudinary").v2;

cloudinary.config({
  cloud_name: process.env.CLOUD_NAME,
  api_key: process.env.API_KEY,
  api_secret: process.env.API_SECRET,
  secure: true,
});

export default async function handler(req, res) {
  const { imageId, overlay } = req.body;
  try {
    await cloudinary.uploader.upload(
      overlay,
      async function (error, uploadedOverlay) {
        const response = await cloudinary.image(`${imageId}.jpg`, {
          transformation: [
            { overlay: `${uploadedOverlay.public_id}` },
            { flags: "region_relative", width: "1.1", crop: "scale" },
            { flags: "layer_apply", gravity: "adv_faces" },
          ],
          sign_url: true,
        });
        res.status(200).json(response);
      }
    );
  } catch (error) {
    res.status(500).json(error);
  }
}

export const config = {
  api: {
    bodyParser: {
      sizeLimit: "4mb",
    },
  },
};
```
With the code above, we created a new API route to handle applying an overlay to a processed image. It expects the public ID of the image to be transformed and an overlay image from the client side. The approach is similar to the one used in the previous API route files, except that the route handler first uploads the overlay image to Cloudinary in order to extract its public ID from the response. Next, we called the image transformation method and set the `overlay` parameter to the extracted public ID of the uploaded overlay image.
Let's update the frontend code. Open your `pages/index.js` file and update the code as shown below:
```jsx
export default function Home() {
  //...
  // add this
  const [overlay, setOverlay] = useState("");

  const handleImageChange = (e, setStateFunc) => {
    //...
  };

  const handleUpload = async () => {
    //...
  };

  const handleCrop = async () => {
    //...
  };

  // add this
  const handleAddOverlay = async () => {
    setUploadStatus("Adding overlay...");
    try {
      const response = await axios.post("/api/overlay", { imageId, overlay });
      const imageUrl = /'(.+)'/.exec(response.data)[1];
      setCldData(imageUrl);
      setUploadStatus("done");
    } catch (error) {
      setUploadStatus("failed..");
    }
  };

  return (
    <main className={styles.main}>
      <h2>Facial attributes detection</h2>
      <div>
        <div className={styles.input}>
          <div>
            <label htmlFor="image">
              {image ? (
                <img src={image} alt="image" />
              ) : (
                "Click to select image"
              )}
            </label>
            <input
              type="file"
              id="image"
              onChange={(e) => handleImageChange(e, setImage)}
            />
          </div>
          <button onClick={handleUpload}>Upload</button>
          <p>{uploadStatus}</p>
          <div className={styles.btns}>
            <button disabled={!imageId} onClick={handleCrop}>
              Crop
            </button>
            {/* add this */}
            <button disabled={!imageId || !overlay} onClick={handleAddOverlay}>
              Add Overlay
            </button>
          </div>
          {/* add this */}
          <div className={styles.overlay}>
            <label>Select Overlay</label>
            <input
              type="file"
              onChange={(e) => handleImageChange(e, setOverlay)}
            />
          </div>
        </div>
        <div className={styles.output}>
          {cldData ? <img src={cldData} alt=" " /> : "Output image"}
        </div>
      </div>
    </main>
  );
}
```
We added a new state called `overlay` to hold the base64 equivalent of the overlay image selected by the user. Next, we added an `<input>` tag of type `file` to select an overlay image, and a button that triggers the `handleAddOverlay` function when clicked. The function makes an Axios call to the `/overlay` API route, attaching the image ID and the overlay image to the request body. It then parses the response to extract the output image URL and sets the states accordingly.
Now you can save the changes and preview the application in your browser.
Find the complete project here on GitHub.
Conclusion
The Advanced Facial Attribute Detection add-on, powered by Cloudinary’s integration with Microsoft's Cognitive Services, provides a high-precision mechanism that seamlessly analyzes images to extract specific information about facial attributes. Using a simple Next.js application, we've seen how to use this add-on to extract advanced face attributes, smartly crop images, and position, scale, and rotate overlay images based on these attributes.
Resources You May Find Helpful