Why no real-time response streaming?

Couldn't figure it out, got bored, arguably gimmicky in the first place.

I also run some checks and cleanup on the text before displaying it, which is much easier to do without streaming.
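The actual checks aren't described here, but a hypothetical example of the kind of whole-text cleanup that's easy on a complete response (and awkward chunk-by-chunk with streaming) might look like:

```javascript
// Hypothetical sketch only -- the real checks aren't specified above.
// Having the full response in hand makes whole-text fixes trivial.
function cleanResponse(text) {
  return text
    .trim()                       // drop leading/trailing whitespace
    .replace(/^["']|["']$/g, "")  // strip quotes the model sometimes wraps around output
    .replace(/\n{3,}/g, "\n\n");  // collapse runs of blank lines
}
```

With streaming, a stray opening quote might already be on screen before you know whether a matching close quote is coming.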

Are you going to steal my API key?

No. It's stored only in your browser and sent to an Express route that calls OpenAI's API directly. You should always have usage limits set up inside your OpenAI account anyway.
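A minimal sketch of that round trip, with hypothetical function names and request shapes (none of this is the actual code): the key travels with each request from the browser and is forwarded straight to OpenAI, never persisted server-side.

```javascript
// Browser side (hypothetical): the key lives only in the browser,
// e.g. in localStorage, and rides along in each request body.
function buildClientRequest(apiKey, prompt) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ apiKey, prompt }),
  };
}

// Server side (hypothetical): the Express route just forwards the key
// in the Authorization header of a direct call to OpenAI's API.
function buildOpenAIRequest({ apiKey, prompt }) {
  return {
    url: "https://api.openai.com/v1/chat/completions",
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({
        model: "gpt-4",
        messages: [{ role: "user", content: prompt }],
      }),
    },
  };
}
```

The point of the design: the server is a stateless relay, so there's no database or log anywhere that holds your key.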

Isn't this just a clone of <other AI writing tool>?


Can I rip off your work?


Tool XYZ is better and has more features

Yes. This one's free and took me 2 days to make.

Will you add <feature>?

Probably not. The very short, very tentative list of things I'd like to add to this one includes:

What did you use to build this?

Express.js, plain old HTML/CSS, and vanilla JavaScript. Honestly, I didn't know Express or Next existed before I started working on this.

Domain from Namecheap, hosting via Heroku behind Cloudflare.

Unrelatedly, I finally figured out how deploying from GitHub works.

I built about 85% of the boilerplate, and much of the more complex functionality, iteratively using GPT-4.

I looked at your code and it's really bad


Are you a programmer?

Nope. I almost finished a CS minor in undergrad and know enough to hack together JS and Python scripts when needed (the devs I work with love me).

Who are you?

A 29-year-old guy with a full-time job at a digital marketing agency.

I want to get in touch

50/50 chance I don't respond, but: @rymaake or [email protected]