cschneid 14 hours ago [-]
Can this take vague ideas, do iterative design with me, and break down tasks to then pass off to agents to build?
I was playing with a very similar project recently that was more focused on a high level input ("Build a new whatever dashboard, <more braindump>") and went back and forth with an agent to clarify and refine. Then broke down into Epics/Stories/Tasks, and then handed those off automatically to build.
The workflow then is iterating on those high level requests. Heavily inspired by the dark factory posts that have been making the rounds recently.
At a glance, it seems like this is designed so that I write all the tasks myself? Does it have any sort of coordination layer to manage git, or otherwise keep agents from stepping on each other?
bumpyclock 14 hours ago [-]
I've been working on a similar project https://github.com/BumpyClock/tasque . It tracks tasks (epics, tasks, subtasks) with dependencies between them. So I plan for an hour or so, and when I walk away from my desk the agents have their tasks to code; then I can come back and verify.
Edit: a minor note: one additional thing in the skill the tool installs is a direction to the agent to create follow-up tasks for any bugs or refactor opportunities it encounters. I find this lets the agent scratch that itch: when it sees something, instead of getting sidetracked and doing that thing, it creates a follow-up task that I can review later, and moves on.
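A rough sketch of what a dependency-aware task store plus that "file a follow-up instead of fixing inline" behavior could look like (tasque's actual schema and API will differ; every name here is illustrative):

```javascript
// Minimal sketch of a dependency-tracked task store. Tasks can nest
// under a parent (epic -> task -> subtask) and declare dependencies;
// an agent that spots a bug files a follow-up task instead of
// getting sidetracked. Purely hypothetical, not tasque's real code.
class TaskStore {
  constructor() { this.tasks = new Map(); }

  add(id, { parent = null, deps = [] } = {}) {
    this.tasks.set(id, { id, parent, deps, done: false });
  }

  complete(id) { this.tasks.get(id).done = true; }

  // A task is ready to hand to an agent when every dependency is done.
  ready(id) {
    const t = this.tasks.get(id);
    return !t.done && t.deps.every((d) => this.tasks.get(d).done);
  }

  // What the agent calls when it notices a bug or refactor opportunity.
  fileFollowUp(fromId, note) {
    const id = `${fromId}-fu${Date.now()}`; // hypothetical ID scheme
    this.add(id, { parent: fromId });
    this.tasks.get(id).note = note;
    return id;
  }
}
```

The `ready` check is what lets you queue an hour's worth of planning and have agents pick up work in dependency order while you're away.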
zingar 14 hours ago [-]
Could you tell us what makes this different from other agent orchestration software?
Also I’m struggling to understand the significance of the 193 tests. Are these to validate the output of the agents?
If they’re just there to prevent regressions in your code, the size of a test suite is not usually a selling point. In particular, for a product this complicated, 193 is a small number of tests, which either means each test does a lot (probably too much) or you’re lacking coverage. Either way I wouldn’t advertise “193 tests”.
brickers 3 hours ago [-]
I agree with what you're saying. However, given the reputation of openclaw (and, I presume, many other vibe-coded spaghetti monsters), I appreciate the signal that "I care about quality".
ge96 16 hours ago [-]
Interesting that most of it is markdown
well except the mission control folder
the code is a mix of old- and new-style JS, e.g. function vs. =>
at a cursory glance the UI has way too many buttons/features, though it probably makes sense once you're in the weeds actually using it; it makes more sense the more I look at it
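The old- vs new-style JS mix mentioned above, side by side; both forms define the same behavior, with the practical differences being hoisting and `this` binding:

```javascript
// Old style: a function declaration. Hoisted to the top of its scope
// and gets its own `this` when called.
function doubleOld(x) {
  return x * 2;
}

// New style: an arrow function assigned to a const. Not hoisted, and
// `this` is taken lexically from the enclosing scope.
const doubleNew = (x) => x * 2;
```

Mixing the two is harmless functionally, but a codebase usually reads better when it picks one convention.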
xiphias2 15 hours ago [-]
Congrats! Great try!
I have a different view point on what to automate and I'm working differently with agents, but I much prefer seeing projects like this on HN to just product announcements.
nikolas_sapa 6 hours ago [-]
wow bro, more people need to hear about this. I don't have access to Claude Code yet, but I use the free Claude for coding tasks and it's still a headache, so when I get Claude Code I'll use this for sure. Also, why don't you have a landing page that leads to the repo? You'd get more traffic that way.