With so much of life happening online, being able to add custom filters to your camera can be pretty useful. We're going to build a React app that lets users apply a filter to their webcam feed and then save the image to Cloudinary.
We'll use p5.js to apply filters to the user's camera. Once you finish this project, you'll be able to start working with p5.js in all kinds of media apps.
Setting up the React app
We can use the create-react-app command to generate a new React project with TypeScript enabled so we can be ready to add types from the beginning. Open your terminal and run the following command.
```bash
$ npx create-react-app camera-filter --template typescript
```
You should see a new folder called `camera-filter` with a number of boilerplate files to get us started. There are a few packages we'll need to handle the camera filter and the upload to Cloudinary.
```bash
$ npm i p5 @types/p5 html-to-image
```
These are the packages we need to make the filter for the camera. Also, if you don't have a Cloudinary account, create a free one before moving forward; you'll need its credentials to upload the pictures to this hosting service.
Now we can work on a new component for our camera filter.
Adding the camera filter component
Go to the `src` folder at the root of your project and add a new sub-folder called `components`. Inside this folder, add a file called `CameraFilter.tsx`. This is a common file structure you'll run into in React projects to help keep things organized. Since the camera filter won't be a whole page by itself, we classify it as a component.
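To keep things concrete, here's roughly the layout we're building toward (the `assets` files come later in the tutorial, when we write the shaders):

```
camera-filter/
└── src/
    ├── assets/
    │   ├── webcam.vert
    │   └── webcam.frag
    └── components/
        └── CameraFilter.tsx
```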
Let's start building the camera filter by importing the following packages at the top of the `CameraFilter.tsx` file.
```tsx
// CameraFilter.tsx

import { useEffect, useRef } from "react";
import { toPng } from "html-to-image";
import p5 from "p5";
```
These are the main packages we'll be working with. Now let's add the functionality we need for p5.js to apply filters to the camera.
Setting up p5.js
Getting p5.js working in React projects can be a little tricky, but once you do, you have a lot of control over how things are displayed on your site. This package has a lot of really interesting functionality you should check out. Make sure you have the `CameraFilter.tsx` file open and add the following code below the imports.
```tsx
// CameraFilter.tsx
...

let cam: any, custShader: any;

function sketch(p: any) {
  // p is a reference to the p5 instance this sketch is attached to
  p.preload = () => {
    custShader = p.loadShader("../assets/webcam.vert", "../assets/webcam.frag");
  };

  p.setup = () => {
    // shaders require WEBGL mode to work
    p.createCanvas(p.windowWidth, p.windowHeight, p.WEBGL);
    p.noStroke();

    // initialize the webcam at the window size
    cam = p.createCapture(p.VIDEO);
    cam.size(p.windowWidth, p.windowHeight);

    // hide the html element that createCapture adds to the screen
    cam.hide();
  };

  p.draw = () => {
    // shader() sets the active shader with our shader
    p.shader(custShader);

    // send the cam to our shader as a uniform
    custShader.setUniform("tex0", cam);

    // the size of one pixel on the screen
    custShader.setUniform("stepSize", [1.0 / p.width, 1.0 / p.height]);

    // how far away to sample from the current pixel
    // 1 is 1 pixel away
    custShader.setUniform("dist", 3.0);

    // rect gives us some geometry on the screen
    p.rect(0, 0, p.width, p.height);
  };
}
```
This code sets up the variables we need for p5 to work with the camera, along with a custom shader that creates the filter. You can learn more about WebGL shaders here so that you'll know how to make your own; that could be useful if you plan on working with 3D media in your web apps.
Next, we create the `sketch` function that holds all of the methods p5 will call to get the camera filter set up and ready to use when we create a new instance of the p5 object a bit later. The `sketch` function implements a few methods that p5 expects.
In order to use our custom filter, we need a `preload` method on the p5 object. It calls the `loadShader` method from p5 with the paths to our shader assets to make the shader ready to use. Then we have the `setup` method, which tells p5 what to do with the DOM as soon as it loads on the page. Finally, we have the `draw` method, which applies the shader to the camera we initialized in the `setup` method.
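The `sketch` function covers everything this tutorial needs, but p5 also calls a `windowResized` method whenever the browser window changes size. If you want the canvas and camera to track the window, here's a minimal optional addition you could drop into `sketch`:

```tsx
// Optional: keep the canvas and the webcam capture sized to the window.
// p5 calls windowResized automatically whenever the browser window is resized.
p.windowResized = () => {
  p.resizeCanvas(p.windowWidth, p.windowHeight);
  cam.size(p.windowWidth, p.windowHeight);
};
```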
The camera component
With all of the p5 setup ready, we need to create the component that gets rendered in the browser. Beneath the p5 code, add the following.
```tsx
// CameraFilter.tsx
...

export default function CameraFilter() {
  const p5ContainerRef = useRef<HTMLDivElement>(null);

  useEffect(() => {
    if (p5ContainerRef.current === null) {
      return;
    }

    // On component creation, instantiate a p5 object with the sketch and container reference
    const p5Instance = new p5(sketch, p5ContainerRef.current);

    // On component destruction, delete the p5 instance
    return () => {
      p5Instance.remove();
    };
  }, []);

  async function submit(e: any) {
    e.preventDefault();

    if (p5ContainerRef.current === null) {
      return;
    }

    const dataUrl = await toPng(p5ContainerRef.current, { cacheBust: true });

    const uploadApi = `https://api.cloudinary.com/v1_1/your_cloud_name/image/upload`;

    const formData = new FormData();
    formData.append("file", dataUrl);
    formData.append("upload_preset", "your_upload_preset");

    await fetch(uploadApi, {
      method: "POST",
      body: formData,
    });
  }

  return (
    <>
      <div id="camera" ref={p5ContainerRef}></div>
      <button type="submit" onClick={submit}>
        Save picture
      </button>
    </>
  );
}
```
Let's walk through this from the beginning. First, we use the `useRef` hook from React to get a reference to the HTML element that the p5 sketch renders into. Then we take advantage of the `useEffect` hook to create an instance of the p5 object with the `sketch` function we wrote earlier when the component is created. We also clean up and remove the p5 instance when the component is destroyed to make sure we don't run into any weird behavior.
Next, we create the `submit` function for when we decide to save a picture that's been altered by our filter. Inside this function, a few things happen. First, we check that the referenced element actually exists; if it doesn't, we simply return from the function. If it does, we use the `html-to-image` package to create a PNG version of the filtered image as a data URL.
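As a side note, the data URL that `toPng` returns isn't tied to Cloudinary. If you also wanted to let users save the picture locally, a hypothetical helper (not part of this tutorial's component, and `filtered-photo.png` is just an assumed file name) could look like this:

```tsx
// Hypothetical helper: trigger a local download of the filtered image
// by clicking a temporary anchor element that points at the data URL.
function downloadPng(dataUrl: string) {
  const link = document.createElement("a");
  link.download = "filtered-photo.png";
  link.href = dataUrl;
  link.click();
}
```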
After we have the PNG, we make a variable that holds the Cloudinary upload API endpoint. Make sure you update this with your own cloud name so the images go to your Cloudinary account. Next, we create a new `FormData` object to hold all of the values Cloudinary needs to accept our upload programmatically. You'll also need to update the upload preset to match your Cloudinary account; you can find this value in your account settings.
The last thing we do in the `submit` function is call the `fetch` API to send our filtered image to Cloudinary.
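Cloudinary responds to a successful upload with a JSON object describing the hosted image. If you want to confirm the upload worked, here's a sketch of how you might read that response inside `submit` (the `secure_url` field is the URL Cloudinary returns for the hosted file):

```tsx
// A sketch of reading Cloudinary's upload response inside submit().
const res = await fetch(uploadApi, {
  method: "POST",
  body: formData,
});
const data = await res.json();

// secure_url points at the hosted image on Cloudinary's CDN
console.log(data.secure_url);
```

With the upload handled, the only things remaining for this component are the elements that actually get rendered.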
As you can see, there isn't much that gets rendered on the page. We have the `<div>` that the p5 instance renders into and a button that calls the `submit` function when we want to save an image. All that's left for our camera filter is writing the shader files that define what the filter does to the image.
Writing shader files
As we mentioned earlier, you can learn more about WebGL shaders here, and I highly recommend taking a look because understanding what's happening in these files is important. We aren't going to do a deep dive into them, since that takes some background outside the scope of this tutorial. We will, however, create a couple of files to make our custom shader.
In the `src` directory, add a new sub-directory called `assets`. Inside that folder, make two new files: `webcam.vert` and `webcam.frag`. Together, these will make up our custom shader. Add the following code to the `webcam.vert` file.
```glsl
// webcam.vert

attribute vec3 aPosition;
attribute vec2 aTexCoord;

varying vec2 vTexCoord;

void main() {
  vTexCoord = aTexCoord;

  // copy the position data into a vec4, using 1.0 as the w component
  vec4 positionVec4 = vec4(aPosition, 1.0);
  positionVec4.xy = positionVec4.xy * 2.0 - 1.0;

  // send the vertex information on to the fragment shader
  gl_Position = positionVec4;
}
```
Then open the `webcam.frag` file and add this code.
```glsl
// webcam.frag

precision mediump float;

// our texcoords from the vertex shader
varying vec2 vTexCoord;

// the texture that we want to manipulate
uniform sampler2D tex0;

// how big of a step to take. 1.0 / width = 1 texel
// doing this math in p5 saves a little processing power
uniform vec2 stepSize;
uniform float dist;

// an array with 9 vec2's
// each index in the array will be a step in a different direction around a pixel
// upper left, upper middle, upper right
// middle left, middle, middle right
// lower left, lower middle, lower right
vec2 offset[9];

// the convolution kernel we will use
// different kernels produce different effects
// we can do things like emboss, sharpen, blur, etc.
float kernel[9];

// the sum total of all the values in the kernel
float kernelWeight = 0.0;

// our final convolution value that will be rendered to the screen
vec4 conv = vec4(0.0);

void main() {
  vec2 uv = vTexCoord;
  // flip the y uvs
  uv.y = 1.0 - uv.y;

  // different values in the kernels produce different effects
  // take a look here for some more examples:
  // https://en.wikipedia.org/wiki/Kernel_(image_processing)
  // https://docs.gimp.org/en/plug-in-convmatrix.html

  // here are a few examples, try uncommenting them to see how they affect the image

  // emboss kernel
  kernel[0] = -2.0; kernel[1] = -1.0; kernel[2] = 0.0;
  kernel[3] = -1.0; kernel[4] = 1.0;  kernel[5] = 1.0;
  kernel[6] = 0.0;  kernel[7] = 1.0;  kernel[8] = 2.0;

  // sharpen kernel
  // kernel[0] = -1.0; kernel[1] = 0.0; kernel[2] = -1.0;
  // kernel[3] = 0.0;  kernel[4] = 5.0; kernel[5] = 0.0;
  // kernel[6] = -1.0; kernel[7] = 0.0; kernel[8] = -1.0;

  // gaussian blur kernel
  // kernel[0] = 1.0; kernel[1] = 2.0; kernel[2] = 1.0;
  // kernel[3] = 2.0; kernel[4] = 4.0; kernel[5] = 2.0;
  // kernel[6] = 1.0; kernel[7] = 2.0; kernel[8] = 1.0;

  // edge detect kernel
  // kernel[0] = -1.0; kernel[1] = -1.0; kernel[2] = -1.0;
  // kernel[3] = -1.0; kernel[4] = 8.0;  kernel[5] = -1.0;
  // kernel[6] = -1.0; kernel[7] = -1.0; kernel[8] = -1.0;

  offset[0] = vec2(-stepSize.x, -stepSize.y); // top left
  offset[1] = vec2(0.0, -stepSize.y);         // top middle
  offset[2] = vec2(stepSize.x, -stepSize.y);  // top right
  offset[3] = vec2(-stepSize.x, 0.0);         // middle left
  offset[4] = vec2(0.0, 0.0);                 // middle
  offset[5] = vec2(stepSize.x, 0.0);          // middle right
  offset[6] = vec2(-stepSize.x, stepSize.y);  // bottom left
  offset[7] = vec2(0.0, stepSize.y);          // bottom middle
  offset[8] = vec2(stepSize.x, stepSize.y);   // bottom right

  for (int i = 0; i < 9; i++) {
    // sample a 3x3 grid of pixels
    vec4 color = texture2D(tex0, uv + offset[i] * dist);

    // multiply the color by the kernel value and add it to our conv total
    conv += color * kernel[i];

    // keep a running tally of the kernel weights
    kernelWeight += kernel[i];
  }

  // normalize the convolution by dividing by the kernel weight
  conv.rgb /= kernelWeight;

  gl_FragColor = vec4(conv.rgb, 1.0);
}
```
This applies a convolution kernel to the camera feed, giving us our custom filter. You can find the source code for this shader and others in this repo. The shader files are already referenced in the code we wrote earlier, so we are finished!
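One thing to watch for if you experiment with the kernels: the sum of the weights controls the normalization. With the emboss kernel, `kernelWeight` works out to (-2) + (-1) + 0 + (-1) + 1 + 1 + 0 + 1 + 2 = 1, so the division is a no-op, and the Gaussian blur kernel sums to 16. The edge detect kernel, however, sums to 0, so dividing by it would break the image. Here's a small guard you could swap in for the division near the end of `main()` (a sketch, not part of the original shader):

```glsl
// Only normalize when the kernel weights don't sum to (nearly) zero;
// e.g. the edge detect kernel sums to 0 and would cause a divide-by-zero.
if (abs(kernelWeight) > 0.0001) {
  conv.rgb /= kernelWeight;
}
```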
The only thing left to do is run your project with `npm start` and take a look at how the filter changes the camera image.
Finished code
You can take a look at the complete code in the `camera-filter` folder of this repo, or you can check it out in this Code Sandbox.
Conclusion
Getting into advanced styling techniques and learning some of the visualization libraries out there is a great way to stay ahead of the curve. With how many of our interactions are virtual now, knowing how to render more complex things for users is a valuable skill.