Unlocking Bun: First-Party Support for Postgres and S3
Have you ever wished for a smoother, faster way to interact with your databases while building applications? Well, your wishes have been answered! Bun just released an exciting new version that introduces first-party support for Postgres and S3. This update opens the floodgates for developers, allowing for seamless integration and more efficient coding practices.
Why This Update Matters
So, what does first-party support for Postgres and S3 really mean?
- Direct Queries: You can now send queries directly to a Postgres server with ease. This means less overhead and a more streamlined development process. For a deeper understanding of how Postgres works, check out A Comprehensive Guide to PostgreSQL: Basics, Features, and Advanced Concepts.
- Effortless File Management: Fetching files from S3 is now straightforward, making file storage and retrieval a breeze.
This new functionality not only simplifies coding but also enhances performance, allowing developers to focus on building rather than troubleshooting.
Key Takeaways
- Simplicity: By integrating Postgres and S3 directly into Bun, the complexity of managing separate services is reduced.
- Efficiency: Faster queries and file management mean quicker application builds and deployments.
- Accessibility: Even for those with little experience in app development, like John with an H (our featured prompt engineer), this update makes it easier than ever to create functional applications.
Real-World Applications
Imagine a scenario where you're building a web application that needs to handle both data storage and file management. With this new update:
- You can utilize Bun's support to store user data in Postgres.
- At the same time, you can manage user-uploaded files in S3 seamlessly. This integration not only speeds up your development but also makes your application more robust. If you're interested in building applications that manage complex data, consider exploring Building the Ultimate Auto Space Parking Application for more insights.
Step-by-Step Explanation
Here’s a quick guide on how to get started with the new features:
- Setting Up Postgres: Configure your Postgres server and connect it using Bun’s streamlined setup process.
- Querying Data: Use simple syntax to send queries directly to your Postgres database.
- Integrating S3: Connect to your S3 bucket and start fetching files using Bun’s intuitive commands.
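The three steps above can be sketched with Bun's new helpers. This is a minimal sketch, not a complete application: it assumes Bun 1.2+, a `POSTGRES_URL` environment variable pointing at your database, and the `S3_*` environment variables covered later in the walkthrough, with a `posts` table and a `profile.png` object already in place.

```typescript
// Minimal sketch of both helpers (assumes Bun 1.2+ with POSTGRES_URL
// and the S3_* environment variables already configured).
import { sql, s3 } from "bun";

// Tagged-template queries go straight to the Postgres server;
// interpolated values are sent as parameters, not concatenated.
const posts = await sql`select * from posts`;

// s3.file() is a lazy reference; reading it fetches from the bucket.
const avatar = s3.file("profile.png");
const bytes = await avatar.arrayBuffer();
```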
Expert Perspectives
In the video, John with an H demonstrates his journey into this new world of Bun with almost no prior knowledge. His experience is a testament to how user-friendly and accessible these updates are.
"I was amazed at how quickly I could grasp the concepts and start building things I never thought possible!" - John with an H.
Conclusion
Bun's latest update is a game changer for developers at all levels. With first-party support for Postgres and S3, it allows for a more integrated, efficient, and enjoyable coding experience. Whether you're a seasoned developer or just starting out, this new version of Bun empowers you to build cool stuff like never before. So, dive in and start exploring the possibilities today! For those looking to enhance their skills further, consider Unlocking the Power of Go: A Comprehensive Programming Course for Beginners.
Bun just dropped a new version and it is awesome. The headline is first party support for Postgres and S3. So you can send queries to a Postgres server like this, or fetch all of your files in an S3 bucket like this with nearly zero configuration.
So let's get into it. Okay, I'm going to start by using bun to initialize this project and accept all of the defaults.
We can now open this one up in VS Code. And then in this index.ts file, we can import the serve function from bun and then call that to create a web server where we can give it some static routes. Let's say the path /bun can route to a static HTML page for Bun. So let's create a new file called bun.html and just render out an H1 tag that says "Hello Bunnies". Now, if we save this, we can import it from our index.ts file.
So import bun from './bun.html'. And then we also need to declare our async fetch function, which takes our request and allows us to declare server endpoints like dynamic API routes, for example. For now, we're just going to return a new response that says "dynamic".
So we're serving our static routes like /bun, which points to our HTML file. And then any routes not listed under static will fall through to our fetch function, where we're serving a dynamic response. So we can run our web server with bun run and then the name of our file, which is index.ts.
And now over in our browser, if we navigate to localhost:3000/bun, we will get our static page for Hello Bunnies. And if we navigate to / or anything else, we'll get our dynamic response.
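Put together, the index.ts described above might look something like this. This is a sketch, not verbatim from the video; it assumes Bun 1.2+ with a bun.html file next to index.ts, and note that the `static` option shown here matches the Bun version in the video, while newer Bun releases rename it to `routes`.

```typescript
// Sketch of the server so far (Bun 1.2+; bun.html lives next to index.ts).
import { serve } from "bun";
import bun from "./bun.html"; // Bun bundles HTML imports for us

serve({
  // Static routes: /bun always answers with the imported page.
  static: {
    "/bun": bun,
  },
  // Anything not matched above falls through to this handler.
  async fetch(req) {
    return new Response("dynamic");
  },
});
```

Running `bun run index.ts` then serves the static page on /bun and the dynamic response everywhere else.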
I know, epic, right? Okay, let's do something a little bit more interesting. Let's say we have a list of posts in our database and we want to dynamically fetch those posts and then render them here on the page.
So back over in our code, we can pull out the URL from our request and then grab the pathname from a new URL object that we'll construct from the URL of our request. And then if that pathname is equal to the string '/api/posts', then we want to return a new response containing all of the posts.
So where are we going to get these posts? Well, with the new Postgres helper that we can import from Bun, we can grab those posts from any Postgres server. So running locally on our machine or anywhere on the internet.
And you know who hosts really good Postgres databases? Supabase. But we'll get to that. For now, we can simply call this Postgres function with a templated string that contains the query that we want the server to execute.
So in this case, we want to select star, or all of the columns, from the posts table. We then get back a response, which in this case would be our posts. And then since a response can only contain a string value, we want to call JSON.stringify to turn this JSON array of posts into one big string. And then we can provide some additional headers on that response to say that the content-type is application/json, so that the browser understands that this is a JSON response and can automatically parse that big string back into a big JSON object. How do we tell this Postgres function where our Postgres server is hosted on the internet?
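Sketched in code, the branch described above might look like this. The `jsonResponse` wrapper is a hypothetical helper name of ours, and the commented query assumes Bun 1.2's Postgres helper with a reachable `POSTGRES_URL`.

```typescript
// Sketch of the /api/posts branch.
function jsonResponse(data: unknown): Response {
  // A Response body can only be a string (or bytes), so we stringify
  // the rows and label the payload as JSON for the browser.
  return new Response(JSON.stringify(data), {
    headers: { "content-type": "application/json" },
  });
}

// Inside the async fetch handler:
//
//   const { pathname } = new URL(req.url);
//   if (pathname === "/api/posts") {
//     const posts = await sql`select * from posts`; // import { sql } from "bun"
//     return jsonResponse(posts);
//   }
```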
Well, we can head over to our browser and navigate to database.new to create a new Supabase project. I'm going to select my Dijon Musters org. My project name is going to be Bunstagram because we're going to be creating an Instagram clone with Bun.
I'm going to automatically generate a password, but I'm going to make sure that I copy this because we're going to need it in a moment. So once you've put that somewhere safe, select a region that's closest to you. I'm going to select Oceania (Sydney) and then create my new project.
This will take a few minutes to set up our project. And then once we land on this page, we can head over to this connect section to grab the connection string for our Postgres database. We don't want this direct connection URI, because this is for long-lived connections, like a long-running server that connects once to our Postgres database and then can send requests back and forth. What we want is the transaction pooler, which is designed for serverless functions that quickly spin up, make a request, and then shut down, which is exactly what's going to happen with our Bun application. So let's copy this URL and head back over to our Bun project, where we're going to create a .env file in the root of our project and then declare an environment variable for POSTGRES_URL, which we'll set to that value from Supabase. You need to manually input your password here, so hopefully you did put it somewhere safe. Then we can give that one a save and then do literally no more configuration, because this Postgres function looks specifically for this POSTGRES_URL environment variable and automatically connects to our Postgres server to send across our select query. So we need some posts to actually be able to select.
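The .env file described above might look like this. The host and port are placeholders: copy the real transaction pooler string from your own Supabase dashboard and fill in the password you saved earlier.

```ini
# .env - Bun's Postgres helper picks this variable up automatically.
POSTGRES_URL=postgresql://postgres.<project-ref>:<your-password>@<pooler-host>:6543/postgres
```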
So let's head back over to the Supabase dashboard and go over to the table editor. We could manually create this table and specify all of the columns that we want, but we're going to totally cheat and use Supabase AI. So I'm going to say: create a table for posts, each with a created_at and title column, and populate it with 50 example posts. This snippet looks good. We're creating a table posts with our columns and turning on row level security to make sure that only the people we want to have access to this have access.
So I'm happy to give this one a run. And yes, we want to execute this query. However, these titles of our example posts are a little bit boring. So let's say: please make our blog post titles a little more interesting. And Supabase AI has absolutely nailed it with "The Secrets of the Universe: Unraveling the Mysteries".
I totally want to read that blog post. So let's give this one a run. And yes, we want to run this query.
And now if we close Supabase AI and give this one a refresh, we can see our posts table has been created, along with our super interesting blog post titles. So back over in our Bun project: if the pathname of the request matches /api/posts, then we want to make a request to our Postgres server, selecting all of the posts (which should actually have an await before it). And then once we have those posts back from Supabase, we're returning a big stringified version, which the browser will turn back into one big JSON array of posts. So let's give that a save and restart our web server.
And then back over in the browser, let's navigate to localhost:3000/api/posts. And we can see a big mess of all of our posts. We can pretty-print them so we can more clearly see that Bun is successfully grabbing back all of the posts from our Postgres database hosted by Supabase.
We can also send this back as HTML. So if we say the content-type is text/html, then rather than stringifying our posts, we want to map over each post and return a templated string with a p tag, the post's title, and then a closing p tag. This will then be a big array of separate strings with those p tags, so we need to join them together with an empty string. Now, if we save this one, restart our server process, and then head back over to the browser and refresh, we see a beautifully presented list of all of our posts rendering on the page.
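The map-and-join step can be sketched as a small pure function. The `renderPosts` name and `Post` shape are ours, not from the video.

```typescript
type Post = { title: string };

function renderPosts(posts: Post[]): string {
  // map() gives us an array of "<p>...</p>" strings; join("") glues
  // them into one HTML string (the default join(",") would leave
  // commas between the tags).
  return posts.map((post) => `<p>${post.title}</p>`).join("");
}
```

Returning this string with a content-type of text/html is what makes the browser render it as a list of paragraphs.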
We now have the power of a full Postgres server and we're not even paying for it. So we have tables, we could have relationships between those tables. We could have Postgres functions and triggers.
We can run transactions with our queries. And because we're using Supabase, we also get auth, edge functions, file storage, and access to Supabase AI, all bundled in there for free.
So what about connecting our project to S3? Does this mean we need another service? Do we need to spend an entire afternoon trying to work out AWS credentials?
Well, no, because Supabase Storage is actually S3-compatible. So back over in our Bun project, we can connect to S3 nearly as easily as we could Postgres by importing this S3 helper from Bun and then declaring some new environment variables to connect to our S3 bucket. So we need an S3_ENDPOINT, an S3_ACCESS_KEY_ID, and an S3_SECRET_ACCESS_KEY, and then the S3_BUCKET that we want to store and read files from.
So we can get each of these values from our Supabase dashboard under project settings. And then under configuration, we want storage, and we can see this S3 connection section, where we've enabled the connection via the S3 protocol. We can grab our endpoint and paste it into our .env file. We can also create a new access key. We can give this a description just so we can find it later.
I'm going to call it Bunstagram and create my access key. This then gives us our access key ID, which we can paste in here, as well as our secret access key, which we want to paste here. And then lastly,
we need a bucket to actually connect to. So if we go over to the storage option and then create a new bucket, we can call this anything we want. I'm going to say images and we can make this bucket public.
But if we want to use our Bun application as a proxy service, so only our Bun application has direct access to our S3 bucket, we can leave this one as private and click save to create our bucket.
And now back over in our .env file, we can tell it the name of our S3 bucket is images. Now, annoyingly, there's a little bit of a bug at the moment: this path gets stripped off the endpoint when Bun tries to connect to our S3 service. The Bun team knows about it and they're fixing it, but for now, a workaround is that we can take /storage/v1/s3 off the end of our S3 endpoint and prepend it to the start of our S3 bucket. So /storage/v1/s3/images. Now, to actually write to our S3 bucket, we can create a new static route for /upload, which points to an upload form, which we need to create a new HTML file for. So this one will be upload.html.
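The .env additions described above might look like this. The values are placeholders: copy your own from Project Settings, and note the workaround for the current path-stripping bug, which moves /storage/v1/s3 from the end of the endpoint to the front of the bucket name.

```ini
# .env - the four variables Bun's S3 helper reads automatically.
S3_ENDPOINT=https://<project-ref>.supabase.co
S3_ACCESS_KEY_ID=<your-access-key-id>
S3_SECRET_ACCESS_KEY=<your-secret-access-key>
S3_BUCKET=/storage/v1/s3/images
```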
And here's some code which I generated using v0 by Vercel. If you haven't checked it out, it's amazing: v0.dev. You give it a text prompt like: generate an upload form where the user can upload a file and specify a title. Use only HTML and CSS.
This looks pretty similar to what I generated earlier, but let's ask a follow up. Say, please make it dark mode. And there you go.
Looks sick. And you can go back and forth with as many modifications as you'd like. The only modification I made to mine was I made the form submit a POST request to the action /api/upload, which we can now create a server endpoint for to receive that request in our Bun application. Actually, before we do that, we will just import our upload form from './upload.html'. Actually uploading this file to S3 will need to be dynamic, so we'll list it here with our other API endpoints. So if the pathname matches /api/upload (this is where our upload form is submitting to), then what we do is grab the form data.
So the image field, which is that file, and the title of our image. And then this is just doing some validation to make sure we have an image, and we can use that S3 helper, which again is already configured and wired up to our Supabase project because we declared each of those environment variables.
We then create a new placeholder file with our image's name and then await that file being written to S3, giving it the actual file that we want to upload. So now that that file has successfully been uploaded to S3, or Supabase Storage, we want to write a new row in our Postgres database, in our images table, telling it the title of our image and then the path where we can find it in our S3 bucket.
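The upload branch described above might be sketched like this. The "image" and "title" field names match the generated form; the `handleUpload` function name is ours, and the sketch assumes the `POSTGRES_URL` and `S3_*` environment variables from earlier plus the images table created below.

```typescript
import { sql, s3 } from "bun";

async function handleUpload(req: Request): Promise<Response> {
  const form = await req.formData();
  const image = form.get("image");
  const title = form.get("title");

  // Basic validation: we need a real file and a title string.
  if (!(image instanceof File) || typeof title !== "string") {
    return new Response("missing image or title", { status: 400 });
  }

  // Write the uploaded file to the bucket under its original name...
  await s3.file(image.name).write(image);

  // ...then record its title and path in Postgres.
  const rows = await sql`
    insert into images (title, path)
    values (${title}, ${image.name})
    returning *`;

  return new Response(JSON.stringify(rows), {
    headers: { "content-type": "application/json" },
  });
}
```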
So let's give this one a save. So we need to create an images table in our Postgres database with columns for title and path. So back over in the Supabase dashboard, we can go to the table editor and then create a new table.
Again, this is going to be called images. We want RLS enabled, and we want to add a column for title, which is going to be of type text. We want to click this little cog and say we do not want this column to be nullable, so we always want each image to have a title. Then let's add another column for path. Again, this is going to be text, and we also don't ever want this to be null. Let's click save to create our images table, and now back over in our Bun project, let's restart our server and head over to localhost:3000/upload.
We can then choose a title for our image. So let's say it's our profile picture and we can choose a file and we can find an image to upload. I'm going to upload this profile.png picture and click upload.
And we see that new row from our Postgres database with the title of our picture, as well as the path to find it in our S3 bucket. And if we look at the Supabase dashboard, and go to storage and look at our images bucket, we can see profile.png has been successfully uploaded.
So now to access this image in our S3 bucket and use our Bun server as a proxy to be able to view it, we need to declare a new endpoint. So this one is going to be: if the pathname includes /api/image/, then we want to take whatever is after that last slash (so the path to our image in our S3 bucket) and put it in a new variable called path. And then accessing files in our S3 bucket is as simple as calling S3.file, giving it the path to our file. And again, this is already configured to talk specifically to our bucket.
And then we can just return a new response with that file. And this will automatically create a new signed URL where the user is able to access that image from our S3 bucket, but only while that pre-signed URL is active. So Bun is acting as a proxy: it's doing all of the signing and talking-to-S3 stuff and giving our user a simple interface to access that file in a secure way. So let's first save this, restart our web server, and head over to localhost:3000/api/image/ followed by the name of our image, which is profile.png.
We see the text dynamic. So something is wrong, because it's fallen through to this response rather than this one, which we can see is because we put this if statement in the wrong place: it is inside the block for /api/upload. So we just want to cut it from there and move it underneath this closing bracket. Now, if we save, restart our web server, and navigate to the same URL, /api/image/profile.png, we'll see this beautiful picture. And if we have a look at our URL, we've been redirected specifically to this bucket with this file and all the other pre-signed URL stuff automatically appended for us. Thank you, Bun. So the last thing I wanted to show is: how do we render out a big gallery of all of the images in our S3 bucket?
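The corrected proxy branch might be sketched like this. `imagePathFrom` is a hypothetical helper of ours, and the commented lines assume Bun's `s3` helper from earlier.

```typescript
function imagePathFrom(pathname: string): string {
  // Everything after "/api/image/" is the key of the file in the bucket.
  return pathname.slice("/api/image/".length);
}

// In the fetch handler, as a sibling of the /api/upload branch
// (not nested inside it, which was the bug):
//
//   if (pathname.startsWith("/api/image/")) {
//     const file = s3.file(imagePathFrom(pathname)); // import { s3 } from "bun"
//     return new Response(file); // Bun answers with a presigned redirect
//   }
```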
So back over in our Bun application, we're going to need a new dynamic server or API route. So if the pathname matches /images, then we want to go up to Postgres and select all of the images.
So this will be an array of all of the rows in our images table. We're then returning a new response where we're calling this renderGallery function, which is just a convenience function I declared above to handle all of the other HTML that we don't really care about: all of the styles, and basically everything that wraps around the actual array of images, which we're going to pass in as this slot. So basically this just gives us the HTML shell. Then we want to iterate over each of the images and render out a block of HTML where we have an a tag that navigates to that specific item. It has an image which points to our API route for that specific image, so that will use our other server endpoint to go and grab this from S3 for each of our images. And then we have a little overlay that pops up when we hover, showing the image's title and a static number of likes and comments (we can implement those later). So again, if we restart our web server and head over to localhost:3000/images, we'll see that one image that's actually in our S3 bucket.
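The per-image markup step could be sketched as a pure helper. The names here are ours, and the renderGallery shell from the video wraps this output; note that titles are interpolated as-is, so a real app should HTML-escape them.

```typescript
type ImageRow = { title: string; path: string };

function renderGalleryItems(images: ImageRow[]): string {
  return images
    .map(
      (image) => `
<a href="/api/image/${image.path}">
  <img src="/api/image/${image.path}" alt="${image.title}" />
  <span class="overlay">${image.title}</span>
</a>`
    )
    .join("");
}
```

Each img src hits our /api/image/ proxy endpoint, so the browser never talks to S3 directly.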
And if we hover over it, we can see its name and the number of likes and comments. Let's go and upload another file. So let's navigate to /upload. The name of this one can be creepy, and we're going to upload this creepy version and click upload. We can see that's been successfully written to our Postgres database. So maybe it would be better to actually redirect the user to that /images page rather than showing this information, but I'll leave that one up to you.
And so here we have our full Bunstagram gallery of our normal profile and then this creepy version, and our quick implementation of Bunstagram is finished and awesome. So thanks to the Postgres and S3 helpers that are now built into Bun, we were able to throw this application together pretty damn quickly.
The other thing that helped us a lot was AI. So if you want to check out just how powerful the Supabase AI Assistant is, then check out this video right here. John with an H, our prompt engineer, sees just how far he can get with
almost zero knowledge about how to build an application. But until next time, keep building cool stuff.
Heads up!
This summary and transcript were automatically generated using AI with the Free YouTube Transcript Summary Tool by LunaNotes.
Related Summaries
- Unlocking the Unlimited Power of Cursor: Boost Your Productivity!
  Discover how to harness Cursor for ultimate productivity, from controlling apps to optimizing workflows!
- A Comprehensive Guide to PostgreSQL: Basics, Features, and Advanced Concepts
  Learn PostgreSQL fundamentals, features, and advanced techniques to enhance your database management skills.
- The Future of AI-Assisted Coding: Insights from the Cursor Team
  Explore how AI is transforming programming with insights from the Cursor team, including Michael Truell, Arvid Lunark, and Aman Sanger.
- Unlocking the Power of Go: A Comprehensive Programming Course for Beginners
  Learn Go programming with our comprehensive course for beginners. Master the fundamentals and build real-world projects!
- The Revolutionary Impact of Claude AI: A Game-Changer for Software Engineering
  Explore how Claude AI surpasses GPT-4 and revolutionary features that redefine productivity.