offmycloud 19 hours ago [-]
The GET request method is supposed to be safe.
"Request methods are considered 'safe' if their defined semantics are essentially read-only; i.e., the client does not request, and does not expect, any state change on the origin server as a result of applying a safe method to a target resource."
-RFC 9110 section 9.2.1
In practice, many GET requests don't adhere to this spec. For example, when you load a page, your "view" generally changes lots of things on the backend. Those changes come back to you in visible ways, too: consider view counts on YouTube videos or X posts.
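A minimal sketch of why such endpoints are technically "unsafe" per RFC 9110 (the store and handler here are hypothetical, standing in for a real backend):

```typescript
// Hypothetical in-memory store standing in for a backend.
type Page = { html: string; viewCount: number };

const store: Record<string, Page> = {
  "/watch?v=abc": { html: "<video>...</video>", viewCount: 0 },
};

// A "safe" GET per RFC 9110 would only read. This handler also
// increments the view counter -- a state change on the origin server.
function handleGet(path: string): Page | undefined {
  const page = store[path];
  if (page) page.viewCount += 1; // the side effect
  return page;
}

handleGet("/watch?v=abc");
handleGet("/watch?v=abc");
console.log(store["/watch?v=abc"].viewCount); // 2 -- two "reads" changed state
```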
sandeepkd 17 hours ago [-]
These are just conventions; one can pretty much do whatever they want in their applications. At the same time, convention has its own advantages (most of the time, think about code maintenance). It's still along the expected lines as long as the mutations are side effects of the GET request. Somewhere down the line, the intent is to provide a separation that makes systems easy to understand.
noman-land 10 hours ago [-]
The GET request is issued to the web server for the content of the HTML page, and then all the scripts. Then the scripts issue POST/PUT requests to the analytics server to update the count.
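That division of labor can be sketched as follows (the analytics endpoint here is hypothetical):

```typescript
// Hypothetical analytics ping: the page HTML arrives via GET, while the
// counter update is an explicit POST sent by a script after page load.
function buildViewPing(videoId: string) {
  return {
    url: "https://analytics.example.com/views", // assumed endpoint
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ videoId, at: Date.now() }),
    },
  };
}

// In a browser script you would then call fetch(url, init).
const { url, init } = buildViewPing("abc123");
console.log(init.method); // "POST" -- the mutation is explicit, not a GET
```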
For GETadb, it's a conflicting sell. The people who need "a db solved by AI" and fully abstracted are using app builders, no? Lovable, v0, Manus. The people who are closer to the code and need an instant db would look to SQLite, Render, Supabase, Neon. I'm all for another option, but then there's the realization that Instant is a new kind of db, and I need to research the value prop vs. the initial persona: "just solve my db problem with AI".
disclaimer: I'm a professional developer, doing an honest review. I may play around with it separately, later. So this marketing site did its job!
stopachka 18 hours ago [-]
One place where a tool like GETadb can be helpful is when you, as a developer, want to build a quick demonstration. For example, one of our co-founders, Joe, saw a tweet about how VCs were ranked. He pointed an agent at Instant, made a quick polling app, and got 600 votes [1].
We hope delightful experiences like that then prod hackers to dive deeper and use Instant for startups.
[1] https://x.com/JoeAverbukh/status/2028544576206860697
When you're experimenting on ideas, it's really nice when time to hello world is near zero. It's also nice when you aren't limited to a certain number of dbs and your projects don't get paused. This makes it so much more delightful to hack.
When you do want to get closer to the code, we think Instant provides a nicer abstraction for working with agents, and you get delightful experiences like a sync engine out of the box.
apsurd 18 hours ago [-]
You're right: the activation energy to throw up anything new really comes down to a hosted db that won't rot. My initial experience with Supabase was incredible in that sense: any client-side framework I wanted, deployed statically, while still integrating with full Postgres, for free!
Problem is supabase rots. And turning that project into anything meaningful is basically undoing everything you got for free up front.
My solution today is SQLite. I'm not a diehard TypeScript person, so it turns out traditional backend apps like Rails running SQLite on tiny/free hardware are pretty nice.
That said, client-side runtime will always be alluring because it can be deployed statically. So you've got something there that I'll check out.
trestacos 17 hours ago [-]
As someone looking to use supabase for a project, I'd love to learn more about what you mean by supabase rots - did you run into scaling problems?
For toy apps and initial prototypes, the problem is they aren't going to get used, so the rot is whether they'll still be in a good-enough state to come back to when you get the time. With Supabase, the drop-in auth completely broke at some point, probably just a deprecation I didn't keep up with.
The Postgres instance spins down when you aren't using it, which is understandable, and I will say it works: it's just Postgres, and you get the database dump if you need to move or come back a year later.
The nuance here is that you get the raw connection string plus the PostgREST API, which all makes sense, but you're choosing full cloud/client mode, which is completely different from just using the raw connection string behind a server layer. I kinda had to work through all of that learning on my own. The full client mode trade-off is that you'll be doing everything with that pattern: handling migrations, security, auth... it's a whole thing. The public PostgREST and row-level security is a different paradigm.
As a professional dev, I would have just chosen the raw connection string and managed the database from the server until I outgrew it, and I'd have the dev workflow already; it's just a Postgres db. Or SQLite to start, same reasoning: it's all the same dev workflow. The problem is the cloud-hosting transition, which is why a fully-managed cloud db accessible from an edge/client runtime is so alluring, but you're trading two very different ergonomics.
I'm thought-dumping, gotta run, hope this helps.
runako 18 hours ago [-]
> we get meta.ai to build an app inside the artifact preview
Is this the kind of use case that is seen as valuable?
I joked a while back that LLM-brain was going to have people building bespoke apps on each HTTP request, and people thought I was exaggerating!
stopachka 18 hours ago [-]
> Is this the kind of use case that is seen as valuable?
I think it could be. Consider an argument like this:
It's valuable to ask ChatGPT questions and receive text responses. Some of the responses are more valuable when they don't just return text, but some markup: bolding, visualizations, etc. Why can't some responses be more valuable if they return little apps?
One place where I've wanted this myself is using LLMs for long-running goals I have. For example, I do my blood work about once a year, and I use the results to make changes and track them. For a long time I had a long chat thread with ChatGPT. Now I have a little app instead.
An extreme version of this starts to turn responses into more and more fully-fledged apps. I did an experiment recently with creating a personal finance app. I found that customizing the app to my specific needs made it much more valuable to me than generic personal finance apps, which have much more effort put into them but aren't tailored to my needs [^1].
[^1]: more on this experiment here: https://x.com/stopachka/status/2040982623636607009
I’ve tried this and I like it. I’d like a platform like Instant but with the addition of a web based text editor and Claude Code / Codex terminal (provide own subscription/api key) that lets you create and edit (create previews, then promote to production) the app from the same interface, alongside the managed db.
wewewedxfgdf 19 hours ago [-]
So, give your LLM a URL and tell it to follow the instructions there?
Err, no thanks.
stopachka 19 hours ago [-]
Most LLMs in practice already read URLs. If you ask them a question they don't know, they will search and read pages.
debarshri 20 hours ago [-]
The agent thing is going a bit out of hand here.
stopachka 20 hours ago [-]
Admittedly stateful GET requests are heretical, but it may be the future!
dennisy 20 hours ago [-]
This is very cool!
But why do we need this? An agent can just have a local DB using SQLite for example.
stopachka 20 hours ago [-]
Two reasons this could make sense:
1. With this, agents can actually deploy a full backend with their credentials [^1].
2. If your agent ever wants to add auth, or real-time presence, or file uploads, or streams, they'll be able to do that too
[^1] Alas, we don't offer static site hosting, so to push the website you would need to use something like the Vercel CLI.
noitpmeder 8 hours ago [-]
Are neither of those things possible with a sqlite backend??? Why would one ever reach for this bespoke database tech
aleda145 19 hours ago [-]
I appreciate this part of the agent instructions: `AESTHETICS ARE VERY IMPORTANT. All apps should LOOK AMAZING and have GREAT FUNCTIONALITY!`
stopachka 19 hours ago [-]
Thank you! Yeah, it is surprising how magic words can impact the performance of LLMs
swyx 19 hours ago [-]
do you actually know or are you just guessing
nezaj 18 hours ago [-]
Funny enough, we added this in a while back when it seemed more conclusive that this does matter. But I was curious and just did an ad-hoc eval.
Here's a version with the aesthetic line included: https://with-aes.vercel.app/
Here's a version without the line: https://wo-aes.vercel.app/
Everything else is the same. Will let y'all be the judge of which is better.
Both were made in one shot with this prompt:
Create a habit tracking app where users can create habits, mark daily completions, and visualize streaks. Include features for setting habit frequency (daily/weekly), viewing completion calendars, and tracking overall progress percentages.
swyx 18 hours ago [-]
hard to try bc with-aes has a login wall lol
nezaj 17 hours ago [-]
Agreed. It is curious how the agent got nudged to add auth in that one.
I did another ad-hoc run, but this time I added "Use guest auth" to the prompt. This way you don't need to enter an email. Full prompt below:
Create a habit tracking app where users can create habits, mark daily completions, and visualize streaks. Include features for setting habit frequency (daily/weekly), viewing completion calendars, and tracking overall progress percentages. Use guest auth
Aesthetic version: https://with-aes-guest-auth.vercel.app/
Non-aesthetic version: https://wo-aes-guest-auth.vercel.app/
I'd give the edge to the aesthetic one.
The biggest problem I see with vibe-coded apps attached to a db is that the db is configured with exactly zero access control (even if the backend supports it), and anyone can turn up and SELECT * FROM users, or even DROP TABLE users. How do you mitigate this?
stopachka 18 hours ago [-]
Good question. Two ways:
1. For the users table specifically, we have a default rule that says `"view": "auth.id == data.id"`. This way, even if the user (or AI) did not set access controls, user data is protected by default.
2. In the instructions file given to the agent (https://www.getadb.com/provision/new), we specifically mention permissions and how to push them. We found this prods the agent to push perms.
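To illustrate what that default rule buys you, here is a toy checker with the same semantics as `auth.id == data.id` (an illustration only, not Instant's actual rule evaluator):

```typescript
type Auth = { id: string } | null;
type UserRow = { id: string; email: string };

// Toy evaluation of the default rule `"view": "auth.id == data.id"`:
// a row is visible only to the authenticated user it belongs to.
function canView(auth: Auth, row: UserRow): boolean {
  return auth !== null && auth.id === row.id;
}

const alice: UserRow = { id: "u1", email: "alice@example.com" };
console.log(canView({ id: "u1" }, alice)); // true  -- own row
console.log(canView({ id: "u2" }, alice)); // false -- someone else's row
console.log(canView(null, alice));         // false -- unauthenticated
```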
lucb1e 19 hours ago [-]
I thought this would be something about getting (downloading?) the Android Debug Bridge tool (adb) until I read further. Might want to capitalize DB as well (GETaDB), at least from my pov
stopachka 19 hours ago [-]
Ah, good point. We can't change the title now though.
danpalmer 16 hours ago [-]
> Remember! AESTHETICS ARE VERY IMPORTANT. All apps should LOOK AMAZING
Why are your database instructions giving instructions about the UI design?
stopachka 11 hours ago [-]
Instant gives you a database, but it also gives you a sync engine that you can use in the frontend. We included this instruction because you would ideally use this to build an app.
danpalmer 4 hours ago [-]
Right but it's not your responsibility to direct the UI. Your responsibility is backend services, possibly at a stretch, the architecture of apps as they relate to using those backend services, but it's most definitely not the aesthetics.
What if the app is headless and the LLM tries to stick a UI on it? What if the app is a TUI and the LLM gets stuck on terminal fonts? What if my UI aesthetic is grungy hackercore and the LLM tries to make it look like every other Tailwind website?
This criticism/feedback is less about what's written, and more about why it was deemed appropriate. You're getting direct input to the development process of your customer's products, and you're using that responsibility to... make pointless comments about design?
reassess_blind 14 hours ago [-]
Is there an easy way to export all data to a format friendly with Postgres?
stopachka 11 hours ago [-]
Currently we recommend folks write a script with the admin SDK. Efficient import/export is on the roadmap!
> Generate a random UUID yourself and use a different UUID each time.
LLMs are terrible at this. If you are relying on this to prevent collisions, it will fail badly.
stopachka 11 hours ago [-]
The UUID doesn't actually affect the response. Every GET request still generates unique credentials, no matter what value is passed to /provision/<uuid>.
We added it to help the app builders that do a lot of caching get unique responses. It turns out that even if you set no-store cache headers, some app builders cache the pages. We tested this idea with those app builders and saw that they did generate UUIDs each time.
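A sketch of that cache-busting pattern from the client's side (the helper is hypothetical; only the /provision/<uuid> path comes from the thread):

```typescript
import { randomUUID } from "node:crypto";

// Each request gets a fresh UUID in the path purely as a cache-buster:
// the server ignores the value and mints new credentials either way.
function provisionUrl(base: string): string {
  return `${base}/provision/${randomUUID()}`;
}

const a = provisionUrl("https://www.getadb.com");
const b = provisionUrl("https://www.getadb.com");
console.log(a !== b); // true -- distinct URLs defeat page-level caches
```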