One of the best ways to engage on social media today is Twitter Spaces, a relatively new Twitter feature that allows users to host audio-based conversations about whatever topics interest them.
It has seen a lot of traction since its inception, and the momentum shows no sign of slowing. As with every other technology, users have already started recommending updates to the feature, like adding comments to Spaces, more reaction emojis, etc.
While we wait for those much-needed features, Alex and I decided to work on a platform that helps Twitter Space organizers aggregate, manage, and reuse their recorded conversations.
One of the biggest challenges we faced was downloading the recorded audio file; since some Spaces conversations last a very long time, the file size can get really big. However, Twitter recently made it possible to download the audio, and we figured now would be a good time to build this project.
Technologies we used:
- Firebase - Database
- Cloudinary - Media hosting and storage
- Next.js - Frontend framework
- TailwindCSS - Styling framework
I should probably write a separate post that pieces together the process of combining all these technologies, but for the scope of this post, I'll limit my writing to how we used Cloudinary to handle the audio files and render them to the client for user consumption.
Cloudinary Integration
First, we needed a service that could handle really large audio files without jeopardizing performance. We considered a few other technologies but ended up going with Cloudinary.
To add Cloudinary to the Next.js project, we first installed it via the CLI with:

```bash
npm i cloudinary
```
Next, we added the Cloudinary CDN script to the project in the `_document.js` file like so:
```jsx
import Document, { Html, Head, Main, NextScript } from "next/document";

class MyDocument extends Document {
  static async getInitialProps(ctx) {
    const initialProps = await Document.getInitialProps(ctx);
    return { ...initialProps };
  }
  render() {
    return (
      <Html>
        <Head>
          <script
            defer
            src="https://widget.cloudinary.com/v2.0/global/all.js"
            type="text/javascript"
          ></script>
        </Head>
        <body>
          <Main />
          <NextScript />
        </body>
      </Html>
    );
  }
}

export default MyDocument;
```
The next thing we did was set environment variables to hold our Cloudinary credentials. We built the project with the Netlify CLI, so to create environment variables we did:
```bash
ntl env:set NEXT_PUBLIC_CLOUDINARY_API_KEY "OUR_API_KEY"
```
We repeated the same procedure for the other variables, like the `API_SECRET`, `CLOUD_NAME`, and `UPLOAD_PRESET`, as sketched below.
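For reference, the remaining variables can be set the same way. The exact names below are a sketch: the cloud-name and upload-preset keys match the `process.env` references in the widget code later in this post, while the secret's name is an assumption (note it gets no `NEXT_PUBLIC_` prefix, since it must stay server-side):

```bash
# Names are assumptions based on the config keys used later in this post
ntl env:set CLOUDINARY_API_SECRET "OUR_API_SECRET"
ntl env:set NEXT_PUBLIC_CLOUDINARY_CLOUD_NAME "OUR_CLOUD_NAME"
ntl env:set NEXT_PUBLIC_CLOUDINARY_UPLOAD_PRESET "OUR_UPLOAD_PRESET"
```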
After we set those variables, we needed to install some more Cloudinary packages to help with the audio file upload. For this, we decided to go with the Cloudinary Upload Widget. It is a handy tool that made it possible for us to maintain the UI design of our project without much overhead.
To install the widget, we ran the command:

```bash
npm i cloudinary/widget
```
When a user wants to upload a new Space, we render a form that allows them to select both a banner for their Space and the audio file containing the recording.
I'm going out on a limb to guess that you're familiar with Cloudinary image uploads, so I'll keep the focus on the audio file. Uploading audio files to Cloudinary is the same as uploading videos; the only difference is the file extension.
For instance, if you have a file named `my-recording.mp4`, Cloudinary will treat it as a video file; however, if you change the file extension to `my-recording.mp3`, Cloudinary will automatically treat it as an audio file. (Audio is handled under Cloudinary's `video` resource type, which is why the upload below sets `resourceType: "video"`.)
As a result, we configured our audio file upload logic just the same way we would a video file:
```jsx
import { useMemo, useState } from "react";
// createRandomId and createAudioId are our own helpers (import path assumed; sketched below)
import { createAudioId, createRandomId } from "../utils/ids";

export function VideoUpload({ userId, spaceId }) {
  const [isAudioUploaded, setIsAudioUploaded] = useState(false);
  const randomId = useMemo(() => createRandomId(), []);

  function handleWidgetClick() {
    const widget = window.cloudinary.createUploadWidget(
      {
        cloudName: process.env.NEXT_PUBLIC_CLOUDINARY_CLOUD_NAME,
        uploadPreset: process.env.NEXT_PUBLIC_CLOUDINARY_UPLOAD_PRESET,
        apiKey: process.env.NEXT_PUBLIC_CLOUDINARY_API_KEY,
        publicId: createAudioId(userId, spaceId, randomId),
        // audio is uploaded under Cloudinary's "video" resource type
        resourceType: "video",
      },
      (error, result) => {
        if (!error && result && result.event === "success") {
          setIsAudioUploaded(true);
        }
      }
    );

    widget.open();
  }

  return (
    <button type="button" onClick={handleWidgetClick}>
      {isAudioUploaded ? "Recording uploaded" : "Upload recording"}
    </button>
  );
}
```
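The `createRandomId` and `createAudioId` helpers referenced above are our own and aren't shown in this post. As a rough idea of what they do, here is a hypothetical sketch (file path and implementations assumed):

```js
// utils/ids.js (hypothetical path and implementations)
export function createRandomId() {
  // Short random slug to keep public IDs unique across uploads
  return Math.random().toString(36).slice(2, 10);
}

export function createAudioId(userId, spaceId, randomId) {
  // Namespace the Cloudinary public ID by user and space,
  // e.g. "users/123/spaces/456/audio-k3j2h1ab"
  return `users/${userId}/spaces/${spaceId}/audio-${randomId}`;
}
```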
The next thing we wanted to account for was security. The default Cloudinary upload method is `unsigned`, which would allow us to upload files with just an upload preset. I highlighted some security concerns with that approach in a separate article. So to curb that, we used the `signed` upload method and set up our file upload logic like so:
```jsx
import { generateSignature } from "../utils/generateSignature";

function uploadAsset() {
  const widget = window.cloudinary.createUploadWidget(
    {
      cloudName: process.env.NEXT_PUBLIC_CLOUDINARY_CLOUD_NAME,
      uploadPreset: process.env.NEXT_PUBLIC_CLOUDINARY_UPLOAD_PRESET,
      apiKey: process.env.NEXT_PUBLIC_CLOUDINARY_API_KEY,
      uploadSignature: generateSignature,
      publicId: createAudioId(userId, spaceId, randomId),
    },
    (error, result) => {
      if (!error && result && result.event === "success") {
        setIsAudioUploaded(true);
      }
    }
  );

  widget.open();
}
```
The addition of an `uploadSignature` parameter on the upload widget means that we are using the signed upload method: the upload details are signed on a server using our API secret to authorize the upload request.
When we take a look at the `utils/generateSignature.js` file, we can see that it makes a request to our server and passes the signed upload signature back to the widget:
```js
export function generateSignature(callback, paramsToSign) {
  fetch(`/api/sign`, {
    method: "POST",
    body: JSON.stringify({
      paramsToSign,
    }),
  })
    .then((r) => r.json())
    .then(({ signature }) => {
      callback(signature);
    });
}
```
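For completeness, here is a minimal sketch of what the `/api/sign` endpoint could look like as a Next.js API route. It assumes the `cloudinary` package we installed earlier and a server-side `CLOUDINARY_API_SECRET` environment variable (the name is an assumption); our actual handler may differ:

```js
// pages/api/sign.js (minimal sketch)
import { v2 as cloudinary } from "cloudinary";

export default function handler(req, res) {
  // The widget POSTs a plain-text JSON body, so we parse it ourselves
  const { paramsToSign } = JSON.parse(req.body);

  // Sign the widget's upload parameters with the API secret,
  // which never leaves the server
  const signature = cloudinary.utils.api_sign_request(
    paramsToSign,
    process.env.CLOUDINARY_API_SECRET
  );

  res.status(200).json({ signature });
}
```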
Displaying audio files
After handling file uploads to Cloudinary with the signed upload method, we needed a way to render the files in our application so users can listen to them. For this, we used the Cloudinary `AdvancedVideo` component exported via the `@cloudinary/react` package.
```jsx
import { AdvancedVideo } from "@cloudinary/react";

export function AudioPlayer({ video }) {
  return <AdvancedVideo cldVid={video} controls />;
}
```
And finally, we pass our Twitter Space audio ID (from Cloudinary) into the `AudioPlayer` component above to render it for users to listen to:
```jsx
<AudioPlayer id="player" video={getVideo(space.audioId)} />
```
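The `getVideo` helper isn't shown in this post; a minimal sketch of it, assuming the `@cloudinary/url-gen` package that pairs with `@cloudinary/react`, could look like this:

```js
// utils/getVideo.js (hypothetical path)
import { Cloudinary } from "@cloudinary/url-gen";

const cld = new Cloudinary({
  cloud: { cloudName: process.env.NEXT_PUBLIC_CLOUDINARY_CLOUD_NAME },
});

// Wraps a Cloudinary public ID in a CloudinaryVideo object
// that AdvancedVideo accepts via its cldVid prop
export function getVideo(publicId) {
  return cld.video(publicId);
}
```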
This is what renders the audio file in our application for users to interact with.
It is worth mentioning that while implementing this part of the application, we tried a couple of things that didn't quite work:
- HLS streaming for the audio files, and
- Cloudinary waveforms
We might investigate these again to see how we can get them to work. In the meantime, if you'd like to check out the web app, you can visit the staging version here with the password `spaces123`. Feel free to open a PR if something catches your interest!
In a later post, I'll dive deeper into how we structured the app to handle authentication with Firebase Firestore and organized the user data and content in Cloudinary.