https://www.inclusivecolors.com/
- You can precisely tweak every shade/tint so you can incorporate your own brand colors. No AI or auto generation!
- It helps you build palettes that have simple to follow color contrast guarantees by design e.g. all grade 600 colors have 4.5:1 WCAG contrast (for body text) against all grade 50 colors, such as red-600 vs gray-50, or green-600 vs gray-50.
- There's export options for plain CSS, Tailwind, Figma, and Adobe.
- It uses HSLuv for the color picker, which makes it easier to explore accessible color combinations because only the lightness slider affects the WCAG contrast. A lot of design tools still use HSL, where the WCAG contrast shifts unpredictably when you change any slider, which makes finding contrasting colors much harder (see the contrast-ratio sketch after this list).
- Check out the included example open source palettes and what their hue, saturation and lightness curves look like to get some hints on designing your own palettes.
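Roughly, the contrast guarantee boils down to the standard WCAG 2.x ratio check. A minimal sketch (the hex values are made-up examples, not actual palette shades):

```
def relative_luminance(hex_color):
    """Relative luminance per WCAG 2.x, from an sRGB hex string like '#b91c1c'."""
    hex_color = hex_color.lstrip("#")
    channels = [int(hex_color[i:i + 2], 16) / 255 for i in (0, 2, 4)]
    linear = [c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4 for c in channels]
    r, g, b = linear
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio, always >= 1 (lighter luminance over darker)."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# e.g. a hypothetical grade-600 shade against a hypothetical grade-50 tint
print(contrast_ratio("#b91c1c", "#f9fafb") >= 4.5)  # True -> OK for body text
```

HSLuv's lightness is a function of relative luminance, which is why only that slider moves this ratio.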
It's probably more for advanced users right now but I'm hoping to simplify it and add more handholding later.
Really open to any feedback, feature requests, and discussing challenges people have with creating accessible designs. :)
My partner shares our journey on X (@hustle_fred), while I’ve been focused on building the product (yep, the techie here :). We’re excited to have onboarded 43 users in our first month, and we're looking forward to getting feedback from the HN community!
Last year, PlasticList found plastic chemicals in 86% of tested foods—including 100% of baby foods they tested. Around the same time, the EU lowered its “safe” BPA limit by 20,000×, while the FDA still allows levels roughly 100× higher than Europe’s new standard.
That seemed solvable.
Laboratory.love lets you crowdfund independent lab testing of the specific products you actually buy. Think Consumer Reports × Kickstarter, but focused on detecting endocrine disruptors in your yogurt, your kid’s snacks, or whatever you’re curious about.
Find a product (or suggest one), contribute to its testing fund, and get full lab results when testing completes. If a product doesn’t reach its goal within 365 days, you’re automatically refunded. All results are published publicly.
We use the same ISO 17025-accredited methodology as PlasticList.org, testing three separate production lots per product and detecting down to parts-per-billion. The entire protocol is open.
Since last month’s “What are you working on?” post:
- 4 more products have been fully funded (now 10 total!)
- That’s 30 individual samples (we do triplicate testing on different batches) and 60 total chemical panels (two separate tests for each sample, BPA/BPS/BPF and phthalates)
- 6 results published, 4 in progress
The goal is simple: make supply chains transparent enough that cleaner ones win. When consumers have real data, markets shift.
Browse funded tests, propose your own, or just follow along: https://laboratory.love
https://github.com/scallyw4g/bonsai
I also wrote a metaprogramming language which generates a lot of the editor UI for the engine. It's a bespoke C parser that supports a small subset of C++, which is exposed to the user through a 'scripting-like' language you embed directly in your source files. I wrote it as a replacement for C++ templates and in my completely unbiased opinion it is WAY better.
For example, 1 PCR reaction (a common reaction used to amplify DNA) costs about $1 each, and we're doing tons every day. Since it is $1, nobody really tries to do anything about it - even if you do 20 PCRs in one day, eh it's not that expensive vs everything else you're doing in lab. But that calculus changes once you start scaling up with robots, and that's where I want to be.
Approximately $30 of culture media can produce >10,000,000 reactions worth of PCR enzyme, but you need the right strain and the right equipment. So, I'm producing the strain and I have the equipment! I'm working on automating the QC (usually very expensive if done by hand) and lyophilizing for super simple logistics.
My idea is that every day you can just put a tube on your robot and it can do however many PCR reactions you need that day, and the next day, you just throw it out! Bring the price from $1 each to $0.01 + greatly simplify logistics!
Of course, you can't really make that much money off of this... but will still be fun and impactful :)
It has some rough edges, but I use it a ton and get a lot of value out of it.
Last month's "what are you working on" thread prompted me to upload this game to itch, and one month later, I've got a small community, lots of feedback, and iterations. It brought a whole new life to a project that was on the verge of being abandoned.
So, I’m really grateful for this thread. https://explodi.itch.io/microlandia
- A front-end library that generates 10kb single-html-file artifacts using a Reagent-like API and a ClojureScript-like language. https://github.com/chr15m/eucalypt
- Beat Maker, an online drum machine. I'm adding sample uploads now with a content-addressable storage API on the server. https://dopeloop.ai/beat-maker
- Tinkering with Nostr as a decentralized backend for simple web apps.
It is a tool that lets you create whiteboard explainers.
You can prompt it with an idea or upload a document and it will create a video with illustrations and voiceover. All the design and animations are done using AI APIs; you don't need any design skills.
Here is a video explainer of the popular "Attention is all you need" paper.
https://www.youtube.com/watch?v=7x_jIK3kqfA
Would love to hear some feedback
It’s an iOS app to help track events and stats about my day as simple dots. How many cups of coffee? Did I take my supplements? How did I sleep? Did I have a migraine? Think of it like a digital bullet journal.
Then visualizing all those dots together helps me see patterns and correlations. It’s helped me cut down my occurrence of migraines significantly. I’m still just in the public beta phase but looking forward to a full release fairly soon.
Would love to hear more feedback on how to improve the app!
This month doubling down on a small house cleaning business that I acquired - https://shinygoclean.com/
- Instead of code, it seems like SOPs have become my new love language.
- Code obeys logic. People obey trust. That’s the real debugging. Still learning!
I'm calling it a "Micro Functions as a Service" platform.
What it really is, is hosted Lua scripts that run in response to incoming HTTP requests to static URLs.
It's basically my version of the old https://webscript.io/ (that site is mostly the same as it was as long as you ignore the added SEO spam on the homepage). I used to subscribe to webscript and I'd been constantly missing it since it went away years ago, so I made my own.
I mostly just made this for myself, but since I'd put so much effort into it, I figure I'm going to try to put it out there and see if anyone wants to pay me to use it. Turns out there's a _lot_ of work that goes into abuse prevention when you're running code from literally anyone on the internet, so it's not ready to actually take signups yet.
https://github.com/jakeroggenbuck/kronicler
This is why I wrote kronicler to record performance metrics while being fast and simple to implement. I built my own columnar database in Rust to capture and analyze these logs.
To capture logs, `import kronicler` and add `@kronicler.capture` as a decorator to functions in Python. It will then start saving performance metrics to the custom database on disk.
You can then view these performance metrics by adding a route to your server called `/logs` where you return `DB.logs()`. You can paste your hosted URL into the settings of usekronicler.com (the online dashboard) and view your data with a couple of charts. See the readme or the website for more details on how to do this.
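For a sense of the setup, here's a minimal sketch based on the description above (Flask is just my choice of example server; how the `DB` handle is obtained isn't covered here, so that part stays a comment rather than a guess):

```
import kronicler
from flask import Flask

app = Flask(__name__)

@kronicler.capture              # records performance metrics for this function
def slow_report(n):
    return sum(i * i for i in range(n))

@app.route("/report")
def report():
    return str(slow_report(1_000_000))

# Then add a `/logs` route that returns `DB.logs()` so the usekronicler.com
# dashboard can pull the captured metrics from your hosted URL.
```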
I'm still working on features like concurrency and other overall improvements. I would love some feedback to help shape this product into something useful for you all.
Thanks! - Jake
- No sign-up, works entirely in-browser
- Live PDF preview + instant download
- EU VAT support
- Shareable invoice links
- Multi-language (10+) & multi-currency
- Multiple templates (incl. Stripe-style)
- Mobile-friendly
GitHub: https://github.com/VladSez/easy-invoice-pdf
Would love feedback, contributions, or ideas for other templates/features.
The insight: your architecture diagram shouldn't be a stale PNG in Confluence. It should be your war room during incidents.
Available as both web app and native desktop.
It's already working, and slightly faster than the CPU version, but that's far from an acceptable result. The occupancy (which is a term I first learned this week) is currently at a disappointing 50%, so there's a clear target for optimisation.
Once I'm satisfied with how the code runs on my modest GPU at home, the plan is to use some online GPU renting service to make it go brrrrrrrrrr and see how many new elements I can find in the series.
Drones are real bastards - there are a lot of startups working on anti-drone systems and interceptors, but most of them are using synthetic data. The data I'm collecting is designed to augment the synthetic data, so anti-drone systems are closer to field testing.
I'm putting a bunch of security tools / data feeds together as a service. The goal is to help teams and individuals run scans/analysis/security project management for "freemium" (certain number of scans/projects for free each month, haven't locked in on how it'll pan out fully $$ wise).
I want to help lower the technical hurdles to running and maintaining security tools for teams and individuals. There are a ton of great open source tools out there, most people either don't know or don't have the time to do a technical deep dive into each. So I'm adding utilities and tools by the day to the platform.
Likewise, there's an expert platform built into the system where you can get help on your security problems. (Currently an expert team consisting of [me].) Longer term, I'm working on some AI plugins to help alert on CVEs custom to you, generate automated scans, and some other fun stuff.
https://meldsecurity.com/ycombinator (if you're interested in free credits)
Still working on growing the audience.
Some are small tech jokes, while others were born from curiosity to see how LLMs would behave in specific scenarios and interactions.
I also tried to use this collection of experiments as a way to land a new job, but I'm starting to realize it might not be serious enough :)
Happy to hear what you think!
https://youtu.be/ZXXJrwNGh8A?t=36 shows wavelengths from around 200nm to 1000nm, but I've been using UV+IR+White LED lights currently and need to try a single broadband light source.
As well as playing with a very simple 3D-printed, Canon EF-mounted night vision tube - https://www.anfractuosity.com/projects/night-vision-tube-mou...
I started this out of frustration that there is no good tool I could use to share photos from my travel and of my kids with friends and family. I wanted to have a beautiful web gallery that works on all devices, where I can add rich descriptions and that I could share with a simple link.
Turned out more people wanted this (got 200+ GitHub stars for the V1) so I recently released the V2 and I'm working on it with another dev. Down the road we plan a SaaS offer for people that don't want to fiddle with the CLI and self-host the gallery.
Write a dev blog in Word format using Tritium, jot down bugs or needs, post blog, improve and repeat.
We have a fun group working on it on Discord (find the discord invite in the How To)
Haunted house trope, but it's a chatbot. Not done yet, but it's going well. The only real blocker is that I ran into the parental controls on the commercial models right away when trying to make gory images, so I had to spin up my own generators. (Compositing by hand is definitely taking forever.)
https://github.com/skanga/Conductor
Conductor is an LLM-agnostic framework for building sophisticated AI applications using a subagent architecture. It provides a robust platform for orchestrating multiple specialized AI agents to accomplish complex tasks, with features like LLM-based planning, memory persistence, and dynamic tool use.
This project is inspired by the concepts outlined in "The Rise of Subagents" by Phil Schmid at https://www.philschmid.de/the-rise-of-subagents and aims to provide a practical implementation of this powerful architectural pattern.
Working on faceted search for logs and CLI client now and trying to share my progress on X.
- 30k requests/month for free
- simple, stable, and fast API
- MCP Server for AI-related workloads
https://apu.software/truegain/
Then it’s on to the next project.
My first career was in sales. And most of the time these interactions began with grabbing a sheet of paper and writing to one another. I think small LLMs can help here.
Currently making use of APIs, but I think small models on phones will be good enough soon. Just completed my MVP.
A tool to help California homeowners lower their property taxes. This works for people who bought in the past years' low-interest environment and are overpaying on taxes because of that.
Feel free to email me, if you have questions: phl.berner@gmail.com
Building a new layer of hyper-personalization over the web. Instead of generating more content, it helps you reformat and interact with what already exists, turning any page, paper, or YouTube video into a summary, mind-map, podcast, infographic or chat.
The broader idea is to make the web adaptive to how each person thinks and learns.
Next in the plans is adding more models and comparing which one gives better results.
To provide trading insights for users.
Last month was better and this month, well, I can't concentrate for long and I get distracted very easily, but I seem to be able to do more with what I have, and a small sense of ambition that I might be able to do bigger things, might not need to drop out of tech and get a simple job, etc., is returning.
I am trying to use this inhibited, fractured state to clarify thoughts about useless technology and distractions, and about what really matters, because (without wishing to sound haughty) I used to be unusually good at a lot of tech stuff, and now I am not. It is sobering but it is also an insight into what it might be like to be on the outside of technology bullshit, looking in.
man, myself needs work
AI sprite animator for 2D video games.
Merchants who want to sell on Etsy or Shopify either have to pay a listing fee or pay per month just to keep an online store on the web. Our goal is to provide a perpetually free marketplace that is powered solely off donations. The only fees merchants pay are the Stripe fees, and it's possible that at some volume of usage we will be able to negotiate those down.
You can sell digital goods as well as physical goods. Right now in the "manual onboarding" phase for our first batch of sellers.
For digital goods, purchasers get a download link for files (hosted on R3).
For physical goods, once a purchase comes through, the seller gets an SMS notification and a shipping label gets created. The buyer gets notified of the tracking number and on status changes.
We use Stripe Connect to manage KYC (know your customer) identities so we don't store any of your sensitive details other than your name and email. Since we are in the process of incorporating as a 501(c)(3) nonprofit, we are only serving sellers based in the United States.
The mission of the company is to provide entrepreneurial training to people via our online platform, as well as educational materials to that aim.
I believe the old internet is still alive and well. Just harder to find now.
We were featured on our local NPR syndicate which is neat: https://laist.com/news/los-angeles-activities/new-grassroots...
Since this is Hacker News, I'll add that I'm building the website and archiving system using Haskell and htmx, but what is currently live is a temp static HTML site. https://github.com/solomon-b/kpbj.fm
It makes tricky functions like torch.gather and torch.scatter more intuitive by showing element-level relationships between inputs and outputs.
For any function, you can click elements in the result to see where they came from, or elements in the inputs to see exactly how they contribute to the result. I found that visually tracing tensor operations clarifies indexing, slicing, and broadcasting in ways that reading the docs can't.
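As a flavor of the element-level mapping it visualizes, here's plain PyTorch (not WhyTorch code):

```
import torch

x = torch.tensor([[10, 11, 12],
                  [20, 21, 22]])
idx = torch.tensor([[2, 0],
                    [1, 1]])

# For dim=1: out[i][j] = x[i][idx[i][j]], so every output element traces back
# to exactly one input element -- the kind of relationship WhyTorch draws.
out = torch.gather(x, 1, idx)
print(out)  # tensor([[12, 10], [21, 21]])
```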
You can also jump straight to WhyTorch from the PyTorch docs pages by modifying the base URL directly.
I launched a week or two back and now have the top post of all time on r/pytorch, which has been pretty fun.
I was motivated to build this because I found that many great personal finance and budget apps didn't offer integrations with the banks I used, which is understandable given the complexity and costs involved. So I wanted to tackle this problem and help build the missing open banking layer for personal finance apps, with very low costs (a few dollars a month) and a very simple API, or built-in integrations.
Still working on making this sustainable, but it's been quite a learning experience so far, and I'm quite excited to see it already making a difference for so many people :)
Building a desktop environment in the cloud with built-in cloud storage, AI, processing, an app ecosystem, and much more!
It runs fully on-device, including email classification and event extraction
I'm a robotics engineer by training, this is my first public launch of a web app.
Try it: https://app.veila.ai (free tier, no email required)
- What it is:
- Anonymous AI chat via a privacy proxy (provider sees our server, not your IP or account info)
- End‑to‑end encrypted history, keys derived from password and never leave your device
- Pay‑as‑you‑go; switch models mid‑chat (OpenAI now; Claude, Gemini and others planned)
- Practical UX: sort chats into folders, Markdown, copyable code blocks, mobile‑friendly
- Notes/limits:
- Not self‑hosted: prompts go to third‑party APIs
- If you include identifying info, upstream sees it
- Prompts sometimes take a while, because reasoning is set to "medium" for now. Plan to make this adjustable in the future.
- Looking for feedback:
- What do you need to trust this? Open source? Independent audit?
- Gaps in the threat model I'm missing
- Which UI features and AI models you'd want next
- Any UX rough edges (esp. mobile)
- Learn more:
- Compare Veila to ChatGPT, Claude, Gemini, etc. (best viewed on desktop): https://veila.ai/docs/compare.html
- Discord: https://discord.gg/RcrbZ25ytb
- More background: https://veila.ai/about.html
Homepage: https://veila.ai
Happy to answer any questions.
It's for doing realtime "human cartography", to make maps of who we are together in complex large-scale discourse (even messy protest).
https://patcon.github.io/polislike-human-cartography-prototy...
Newer video demo: https://youtu.be/C-2KfZcwVl0
It's for exploring human perspective data -- agree, disagree, pass reactions to dozens or hundreds of belief statements -- so we can read it as if it were Google Maps.
My operating assumption is that if a critical mass of us can understand culture and value clashes as mere shapes of discourse, and we can all see it together, then we can navigate them more dispassionately and with clear heads. Kinda like reading a map or watching the weather report -- islands that rise from oceans, or plate tectonics that move like currents over months, and terraform the human landscape -- maybe if we can see these things together, we'll act less out of fear of fun-house caricatures. (E.g., "Hey, dad, it seems like the peninsula you're on is becoming a land bridge toward the alt right corner. I feel a little bummed about that. How do you feel about it?")
(It builds on data and the mathematical primitives of a great tool called Pol.is, which I've worked with for almost a decade.)
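If you're curious what the underlying data looks like: it's basically a participants-by-statements matrix of agree/disagree/pass reactions projected down to 2D. A toy sketch of that core step (made-up reactions, plain PCA rather than the project's actual pipeline):

```
import numpy as np
from sklearn.decomposition import PCA

# rows = participants, cols = statements; +1 agree, -1 disagree, 0 pass
reactions = np.array([
    [ 1,  1, -1,  0,  1],
    [ 1,  1, -1, -1,  1],
    [-1, -1,  1,  1,  0],
    [-1,  0,  1,  1, -1],
    [ 0,  1, -1,  0,  1],
])

coords = PCA(n_components=2).fit_transform(reactions)
for person, (x, y) in enumerate(coords):
    print(f"participant {person}: ({x:+.2f}, {y:+.2f})")  # a point on the "map"
```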
Experimental prototype of animating between projections: https://main--68c53b7909ee2fb48f1979dd.chromatic.com/iframe.... (advanced)
A simple document translator that preserves your file's formatting and layout.
On-site surveys for eCommerce and SaaS. It's been an amazing ride leveling up back and forth between product, design, and marketing. Marketing is way more involved than most people on this site realize...
Besides the LLM experimentation, this project has allowed me to dive into interesting new tech stacks. I'm working in Hono on Bun, writing server-side components in JSX and then updating the UI via htmx. I'm really happy with how it's coming together so far!
[0] https://github.com/stryan/materia and/or https://primamateria.systems/
The goal was to make the learning material very malleable, so all content can be viewed through different "lenses" (e.g. made simpler, more thorough, from first principles, etc.). A bit like Wikipedia it also allows for infinite depth/rabbit holing. Each document links to other documents, which link to other documents (...).
I'm also currently in the middle of adding interactive visualizations which actually work better than expected! Some demos:
Open source pipeline to ensemble >=2 cheap data vendors into a reliable US equities securities master. If anyone wants to collaborate, I'd love to hear from you, in particular to split the effort of acquiring & normalizing data from multiple vendors for evaluation. Boring + tedious stuff, but there are a few technically interesting sub-tasks: (1) extract reference data from various unstructured sources and link them by name; (2) detect and correct data errors without a source of ground truth. Success could unblock much more interesting projects.
It’s been a fun, practical way to continuously evaluate the latest models two ways - via coding assistance & swapping between models to power the conversational AI voice partner. I’ve been trying to add one big new feature each time the model generation updates.
The next thing I want to add is a self improving feedback loop where it uses user ratings of the calls & evaluations to refine the prompts that generate them.
Plus it has a few real customers which is sweet!
Nice to call it feature complete and move on!
I just released the changelog 5 minutes ago: https://intrasti.com/changelog. I went with a directory-based approach using the international date format YYYY-MM-DD, so in the source code it's ./changelog/docs/YYYY/MM/DD.md. Seems to do the trick, and it's ready for pagination, which I haven't implemented yet.
We're pretty jazzed.
Beyond that, just regular random stuff that comes up here and there, but, for once, my hdd with sidelined projects is slowly being worked through.
I just took Qwen-Image and Google’s image AIs for a spin and I keep a side by side comparison of many of them.
https://generative-ai.review/2025/09/september-2025-image-ge...
and I evaluated all the major 3D Asset creators:
https://generative-ai.review/2025/08/3d-assets-made-by-genai...
I am building a tool that gives automated qualitative feedback on websites. This is the early and embarrassing MVP: https://vibetest-seven.vercel.app/product
You provide your URL and an LLM browses your site and writes up feedback. Currently working on increasing the quality of the feedback. Trying to start with a narrower set of tests that give what I think is good feedback, then increase from there.
If a tool like this analyzed your website, what would you actually want it to tell you? What feedback would be most useful?
It’s got the base instruction set implemented and working, plus a CRT shader, resizable display, and swappable color palettes.
I’m working on sound and a visual debugger for it.
I have some work to do on the Haskell TigerBeetle client and the Haskell postgresql logical replication client library I wrote too.
(But also just launched https://ChessHoldEm.net this weekend)
AppGoblin is a free place to do app research: understand which apps use which companies to monetize, track where data is sent, and see what kinds of ads are shown.
I want to write VoIP plugins using a modern toolchain and benefit from the wider crate ecosystem.
And an agentic news digest service which scrapes a few sources (like Hacker News) for technical news and creates a daily digest, which you can instruct and skew with words.
Right now, it’s a better way to showcase your really specific industry skills and portfolio of 3D assets (i.e., "LinkedIn for VR/XR") with hiring layered on.
starting to add onto the current perf analysis tools and think more about how to get to a “lovable for VR/XR”
This is a free license plate tracking game for families on road trips. Currently adding more OAuth providers, and some time zone features.
It's an AI webapp builder with a twist: I proxy all OpenAI API calls your webapp makes and charge 2x the token rate, so when you publish your webapp onto a subdomain, the users of your webapp are charged 2x on their token usage. Then you, the webapp creator, get 80% of what's left over after I pay OpenAI (and I get 20%).
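To make the split concrete (made-up numbers):

```
openai_cost = 0.40                    # what OpenAI charges for the end user's tokens
user_charge = 2 * openai_cost         # end users pay 2x the token rate
margin = user_charge - openai_cost    # what's left after paying OpenAI
creator_share = 0.80 * margin         # the webapp creator's cut
platform_share = 0.20 * margin        # my cut
print(user_charge, creator_share, platform_share)  # ~0.80, ~0.32, ~0.08
```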
It's also a fun project because I'm making code changes a different way than most people are: I'm having the LLM write AST modification code; My site immediately runs the code spit out by the LLM in order to make the changes you requested in a ticket. I blogged about how this works here: https://codeplusequalsai.com/static/blog/prompting_llms_to_m...
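For anyone unfamiliar with the approach, here's the general shape of an AST-modifying transform, sketched with Python's ast module (just an illustration, not the code my site actually runs):

```
import ast

class RenameFunction(ast.NodeTransformer):
    """Rename one function and leave everything else untouched."""
    def visit_FunctionDef(self, node):
        if node.name == "old_handler":
            node.name = "new_handler"
        return self.generic_visit(node)

source = "def old_handler(req):\n    return req\n"
tree = RenameFunction().visit(ast.parse(source))
print(ast.unparse(tree))  # def new_handler(req): ...
```

The LLM writes a transform like this instead of rewriting the whole file, and the site runs it immediately to apply the change requested in a ticket.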
It's an API that allows zero-knowledge proofs to be generated in a streaming fashion, meaning ZKPs that use way less RAM than normal.
The goal is to let people create ZKPs of any size on any device. ZKPs are very cool but have struggled to gain adoption due to the memory requirements. You usually need to pay for specialized hardware or massive server costs. Hoping to help fix the problem for devs
https://jsassembler.fly.dev/ https://csharpassembler.fly.dev/ https://goassembler.fly.dev/ https://rustassembler.fly.dev/ https://nodeassembler.fly.dev/ https://phpassembler.fly.dev/
The purpose is to find out whether I can build declarative software in multiple languages (Rust, Go, Node.js, PHP, and JavaScript) while knowing only one language (C#), without understanding the implementation deeply.
Another purpose is to validate AI models and their efficiency: development using AI is hard but highly productive, and having declarative rules to recreate the implementation can be used to validate models.
Currently I am convinced it is possible to build, but I'm now working on creating a solid foundation: tests for the two assembler engines, structure dumps, and logging output that the AI can use to fix issues iteratively.
I need to add more declarative rules and implement a full-stack web assembler to see if the AI will hit the technical debt that slows/stops progress. Only time will tell.
https://github.com/RoyalIcing/Orb
It’s implemented in Elixir and uses its powerful macro system. This is paired with a philosophy of static & bump allocation, so I’m trying to find a happy medium of simplicity with a powerful-enough paradigm yet generate simple, compact code.
The goal is to provide a fully typed nodeJS framework that allows you to write a typescript function once and then decide whether to wire it up to http, websocket, queues, scheduled tasks, mcp server, cli and other interactions.
You can switch between serverless and server deployments without any refactoring; it's completely agnostic to whatever platform you're running it on.
It also provides services, permissions, auth, eventhub, advanced tree shaking, middleware, schema generation and validation and more
The way it works is by scanning your project via the TypeScript compiler and generating a bootstrap file that imports everything you need (hence tree shaking), and allows you to filter down your backend to only the endpoints needed (great to pluck out individual entry points for serverless). It also generates typed fetch, RPC, websocket, and queue client files. Types are pretty much most of what pikku is about.
Think honoJS and nestJS sort of combined together, but deciding to support most server standards, not just HTTP.
The website needs love; currently working on a release to add CLI support and full tree shaking.
https://github.com/olooney/jellyjoin
It hits a sweet spot by being easier to use than record linkage[3][4] while still giving really good matches, so I think there's something there that might gain traction.
[1]: https://platform.openai.com/docs/guides/embeddings
[2]: https://en.wikipedia.org/wiki/Hungarian_algorithm
Thinking about: A new take on LinkedIn/web-of-trust, bootstrapped by in-person interactions with devices. It seems that the problem of proving who is actually human and getting a sense of how your community values you might be getting more important, and now devices have some new tools to bring that within reach.
We’re working directly with partner housing unions and charities in Britain and Ireland to build the first central database of rogue landlords and estate agents. Users can search an address and see if it’s marked as rogue/dangerous by the local union, as well as whether you can expect to see your deposit returned, maintenance, communication - etc.
After renting for close to a decade, it’s the same old problems with no accountability. We wanted to change this, and empower tenants to share their experiences freely and easily with one another.
We’re launching in November, and I’m very excited to announce our partner organisations! We know this relies on a network effect to work, and we’re hoping to run it as a social venture. I welcome any feedback.
The solution? Have the cartridge keep track of CPU parity (there's no simple way to do this with just the CPU), then check that, skip one cycle if needed... and very carefully cycle time the rest of the routine, making sure that your reads land on safe cycles, and your writes land in places that won't throw off the alignment.
But it works! It's quite reliable on every console revision I've thrown it at so far. Suuuper happy with that.
It's been a great project to understand how design depends on a consistent narrative and purpose. At first I put together elements I thought looked good but nothing seemed to "work" and it's only when I took a step back and considered what the purpose and philosophy of the design was that it started to feel cohesive and intentional.
I'll never be a designer but I often do side projects outside my wheelhouse so I can build empathy for my teammates and better speak their language.
I'm trying to use this to create stories that would be somewhat unreasonable to write otherwise. Branching stories (i.e., CYOA), multiperspective stories, some multimedia. I'm still trying to figure out the narrative structures that might work well.
LLMs can overproduce and write in different directions than is reasonable for a regular author. Though even then I'm finding branching hard to handle.
The big challenges are rhythm, pacing, following an arc. Those have been hard for LLMs all along.
Basically, think of it as "Pokemon the anime, but for real". We allow you to use your voice to talk to, command, and train your monster. You and your monster are in this sandbox-y, dynamic environment where your actions have side effects.
You can train to fight or just to mess around.
Behind the scenes, we are converting the player's voice into code in real time to give life to these monsters.
If you're interested, reach out!
You can read more about it and watch a demo: https://blog.with.audio/posts/web-reader-tts
I built this to get some traffic to my main project's website using a free tool people might like. The main project: https://desktop.with.audio -> a one-time-payment text-to-speech app with text highlighting, MP3 export, and other features, on macOS (ARM only) and Windows.
It's a browser extension right now, and the platform integrates with SSO providers and AI APIs to help discover shadow AI, enforce policies, and create audit trails. Think observability for AI adoption, but also Grammarly, since we help coach end users toward better behavior/outcomes.
Early days, but the problem is real; we have a few design partners in the F500 already.
So I started working on Librario, an ISBN database that fetches information from several other services, such as Hardcover.app, Google Books, and ISBNDB, merges that information, and returns something more complete than using them alone. It also saves that information in the database for future lookups.
You can see an example response here[1]. Pricing information for books is missing right now because I need to finish the extractor for those, genres need some work[2], and having a 5-month-old baby makes development a tad slow, but the service is almost ready for a preview.
The algorithm to decide what to merge is the hardest part, in my opinion, and very basic right now. It's based on a priority and score system for now, where different extractors have different priorities, and different fields have different scores. Eventually, I wanna try doing something with machine learning instead.
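In pseudocode-ish Python, the current approach is roughly this (extractor names, priorities, and field scores here are made up, not Librario's actual values):

```
EXTRACTOR_PRIORITY = {"hardcover": 3, "google_books": 2, "isbndb": 1}
FIELD_SCORE = {"title": 10, "authors": 8, "description": 5, "page_count": 2}

def merge(records):
    """records maps extractor name -> partial book record."""
    merged, winner = {}, {}
    for extractor, record in records.items():
        prio = EXTRACTOR_PRIORITY.get(extractor, 0)
        for field, value in record.items():
            if value in (None, "", []):
                continue
            if field not in merged or prio > winner[field]:
                merged[field] = value          # higher-priority extractor wins the field
                winner[field] = prio
    score = sum(FIELD_SCORE.get(f, 1) for f in merged)   # rough completeness score
    return merged, score

print(merge({
    "isbndb": {"title": "Example", "page_count": 320},
    "hardcover": {"title": "Example: A Novel", "description": "..."},
}))
```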
I'd also like to add book summaries to the data somehow, but I haven't figured out a way to do this legally yet. For books in the public domain I could feed the entire book to an LLM and ask them to write a spoiler-free summary of the book, but for other books, that'd land me in legal trouble.
Oh, and related books, and things of the sort. But I'd like to do that based on the information stored in the database itself instead of external sources, so it's something for the future.
Last time I posted about Shelvica some people showed interest in Librario instead, so I decided to make it something I can sell instead of just a service I use in Shelvica[3], hence why I'm focusing more on it these past two weeks.
[1]: https://paste.sr.ht/~jamesponddotco/de80132b8f167f4503c31187...
[2]: In the example you'll see genres such as "English" and "Fiction In English", which is mostly noise. Also things like "Humor", "Humorous", and "Humorous Fiction" for the same book.
[3]: Which is nice, cause that way there are two possible sources of income for the project.
Funny thing is, the advisor started to tell me to sell last week, and so I did. Then last Friday happened. Interesting.
(It's a frontend to make searching eBay actually pleasant)
Take a picture of an event flyer or paste in some text. The event gets added to your calendar.
I have been trying to study Chinese on my own for a while now and found it very frustrating to spend half the time just looking for simple content to read and listen to. Apps and websites exist, but they usually only have very little content or they ramp up the difficulty too quickly.
Now that LLMs and TTS are quite good, I wanted to try them out for language learning. The goal is to create a vast number of short AI-generated stories to bridge the gap between knowing a few characters and reading real content in Chinese.
Curious to see if it is possible to automatically create stories which are comfortable to read for beginners, or if they sound too much like AI-slop.
-----
COCKTAIL-DKG - A distributed key generation protocol for FROST, based on ChillDKG (but generalized to more elliptic curve groups) -- https://github.com/C2SP/C2SP/pull/164 | https://github.com/C2SP/C2SP/issues/159
-----
A tool for threshold signing software releases that I eventually want to integrate with SigStore, etc. to help folks distribute their code-signing. https://github.com/soatok/freeon
-----
Want E2EE for Mastodon (and other ActivityPub-based software), so you can have encrypted Fediverse DMs? I've been working on the public key transparency aspect of this too.
Spec: https://github.com/fedi-e2ee/public-key-directory-specificat...
Implementation: Coming soon. The empty repository is https://github.com/fedi-e2ee/pkd-server-go but I'll be pushing code in the near future.
You can read more about this project here: https://soatok.blog/category/technology/open-source/fedivers...
This is basically a variation on bit-packing (which is NP-hard), but it's tractable if you prune the search space enough.
Since last month, I have created a complete schematic with Circuitscript, exported the netlist to pcbnew, and designed the PCB. The boards have been produced and I'm currently waiting for them to be delivered to verify that everything works. Quite excited, since this will be the first design ever produced with Circuitscript as the schematic capture tool!
The motivation for creating Circuitscript is to describe schematics in terms of code rather than graphical UIs after using different CAD packages extensively (Allegro, Altium, KiCAD) in the past. I wanted to spend more time thinking about the schematic design itself rather than fiddling around with GUIs.
The main language goals are to be easy to write and reason about, to display generated graphical schematics according to how the designer wishes (because this is also part of the design process), and to encourage code reuse.
Please check it out and I look forward to your feedback, especially from electronics designers/hobbyists. Thanks!
It can process a set of 3-hour audio files in ~20 mins.
I recorded a demo video of how it works here: https://www.youtube.com/watch?v=v0KZGyJARts&t=300s
[1] https://github.com/naveedn/audio-transcriber
I alluded to building this tool on a previous HN thread: https://news.ycombinator.com/item?id=45338694
Just added health inspection data from countries that have it in open datasets (UK and Denmark). If anyone knows of others, I'd be appreciative of hints.
Thinking of focusing on another idea for the rest of the year; have a rough idea for a map-based UI to structure history by geofences or lat/lng points for small local museums.
It's called lazyslurm - https://github.com/hill/lazyslurm
Would love feedback! <3
Fitness Tools https://aretecodex.pages.dev/tools/
Fitness Guides https://aretecodex.pages.dev/
A lot of people often ask questions like:
- How do I lose body fat and build muscle?
- How can I track progress over time?
- How much exercise do I actually need?
- What should my calorie and macro targets be?
One of the most frequently asked questions in fitness forums is about cutting, bulking, or recomposition. This tool helps you navigate those decisions: https://aretecodex.pages.dev/tools/bulk-cut-recomposition-we...
We’ve also got a Meal Planner that generates meal ideas based on your calorie intake and macro split: https://aretecodex.pages.dev/tools/meal-plan-planner
Additionally, I created a TDEE Calculator designed specifically to prevent overshooting TDEE in overweight individuals: https://aretecodex.pages.dev/tools/tdee-calculator
For a deeper dive into the concept of TDEE overshoot in overweight individuals, check out this detailed post: https://www.reddit.com/r/AskFitnessIndia/comments/1mdppx5/in...
I discovered that "least common ancestor" boils down to the intersection of 'root-path' sets, where you select the last item in the intersection as the 'first/least common ancestor'.
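In code, that observation is just a few lines (made-up parent map):

```
PARENT = {"a": None, "b": "a", "c": "a", "d": "b", "e": "b", "f": "e"}

def root_path(node):
    """Path from the root down to `node`."""
    path = []
    while node is not None:
        path.append(node)
        node = PARENT[node]
    return list(reversed(path))

def least_common_ancestor(x, y):
    px, py = root_path(x), root_path(y)
    shared = [a for a, b in zip(px, py) if a == b]  # ordered intersection of root paths
    return shared[-1]                               # the last shared item

print(least_common_ancestor("d", "f"))  # "b"
```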
Imagine your basic Excel spreadsheet -> generating document files, but add:
- Other sources like SQL queries
- User form (e.g. "Generate documents for Client Category [?]")
- Chaining sources in order, like SQL queries with parameters based on the user form
- Split at multiple points (5 records in a csv, 4 records in a sql result = 20 generated documents)
- Full Jinja2 templating with field substitution but also if/for blocks that works nicely with .docx files (sketched below)
- PDF output
- Output file names using the same templating: "/BusinessDrive/{{ client_id }}/Invoice - {{ invoice_id }}.pdf"
All saved in reproducible workflows (for example if you need to process a .csv file you receive each morning)
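The templating itself is plain Jinja2; a tiny sketch of the field substitution and templated file names (made-up fields, and a real run renders into .docx/PDF rather than strings):

```
from jinja2 import Template

records = [
    {"client_id": "C001", "invoice_id": "2024-001", "total": 120},
    {"client_id": "C002", "invoice_id": "2024-002", "total": 340},
]

body_tpl = Template("Invoice {{ invoice_id }} for {{ client_id }}: {{ total }} EUR")
name_tpl = Template("/BusinessDrive/{{ client_id }}/Invoice - {{ invoice_id }}.pdf")

for rec in records:
    print(name_tpl.render(**rec), "->", body_tpl.render(**rec))
```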
https://github.com/westonwalker/primelit
Drawing a lot of inspiration from interval.com. It was an amazing product but was a hosted SAAS. I'm exploring taking the idea to the .NET ecosystem and also making it a Nuget package that can be installed and served through any ASP.NET project.
The idea is to enable a comment section on any webpage, right as you’re browsing. Viewing a Zillow listing? See what people are excited about with the property. Wonder what people think about a tourist attraction? It’ll be right there. Want to leave your referral or promo code on a checkout page for others? Post it.
Not sure what the business model will look like just yet. Just the kind of thing I wish existed compared to needing to venture out to a third party (traditional social media / forums etc) to see others’ thoughts on something I’m viewing online. I welcome any feedback!
(It was supposed to be completed months ago but got stuck in other issues)
Here's the waitlist and proposal: https://waitlist-tx.pages.dev
The goal is to catch vulnerabilities early in the SDLC by running an agentic loop that autonomously hunts for security issues in codebases. Currently available as a CLI tool and VS Code extension. I've been actively using it to scan WordPress and Odoo plugins and found several privilege escalation vulns. I have documented them in a blog post here: https://codepathfinder.dev/blog/introducing-secureflow-cli-t...
An agent that plugs into Slack and helps companies identify and remediate infrastructure cost-related issues.
It is a modified version of Shopify CEO Tobi's `try` implementation[0]. It extends his implementation with sandboxing capabilities and is designed with functional core, imperative shell in mind.
I had success using it to manage multiple coding agents at once.
It's mostly where I want it to be now, but still need to automate the ingest of USPTO data. I'd really like it to show a country flag on the search results page next to each item, but inferring the brand name just from the item title would probably need some kind of natural language processing; if there's even a brand in the title.
No support for their mobile layout. Do many people buy from their phone?
The main idea is to bring as many of the agentic tools and features as possible into a single cohesive platform so that we can unlock more useful AI use-cases.
It currently supports complex heatmaps based on travel time (e.g. close to work + close to friends + far from police precincts), and has a browser extension to display your heatmap over popular listing sites like Zillow.
I'm thinking of making it into an API to allow websites to integrate with it directly.
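The scoring idea, very roughly (made-up travel times and weights; the real heatmaps are more involved):

```
import numpy as np

# travel time in minutes from each map cell to each target (made-up 2x2 grid)
minutes_to_work    = np.array([[10, 25], [40, 15]])
minutes_to_friends = np.array([[20, 10], [30, 35]])
minutes_to_police  = np.array([[ 5, 20], [25, 10]])

def closeness(minutes):           # shorter travel time -> score closer to 1
    return 1 / (1 + minutes / 15)

score = (closeness(minutes_to_work)
         + closeness(minutes_to_friends)
         - closeness(minutes_to_police))   # "far from" enters with a minus sign
print(score)                               # higher = better cell on the heatmap
```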
Other than that, I've been doing a lot of fixing of tech debt in my home network from the last six years. I've admittedly kind of half-assed a lot of the work with my home router and my server and my NAS and I want these things to be done correctly. (In fairness to me, I didn't know what I was doing back when I started, and I'd like to think I know a fair bit better now).
For example, when I first built my server, I didn't know about ZFS datasets, so everything was on the main /tank mount. This works but there are advantages to having different settings for different parts of the RAID and as such I've been dividing stuff into datasets (which has the added advantage of "defragging" because this RAID has grown by several orders of magnitude and as a result some of the initial files were fragmented).
The challenge is: how can ChatGPT understand your "query", or say, "prompt"? Raw data is not good enough, so I use a term called "AI Understanding Score" to measure it: https://senify.ai/ai-understanding-score. I think this index will help users build more context so that the AI can know more and answer with correct results.
This is very early work, without every detail considered; I'd really like to have your feedback and suggestions.
You can have a try with some MCP services here: https://senify.ai/mcp-services
Thanks.
So, I built it.
Using ChatGPT's voice agents to generate Github issues tagging @claude to trigger Claude Code's Github Action, I created https://voicescri.pt that allows me to have discussions with the voice agent, having it create issues, pull requests, and logical diffs of the code generated all via voice, hands free, with my phone in my pocket.
Now the foundation is done and I've learnt a lot. I'm actually eating my own dog food by using it to track my own classical guitar practice every day. I am pausing a while to process the requirements, thinking ultra deeply about what would be helpful and how to shape the product.
LLMs such as Codex and Claude Code definitely helped a lot, but I guess human beings' opinions would be more helpful; after all, the tool is made for humans, not for Claude Code.
I would also like to hear: when you start a project, if you know your audience isn't super close to AI, would you still consider enabling AI features for them?
https://apps.apple.com/us/app/teletable-football-teletext/id...
For work, https://heyoncall.com/ as the best tool for on-call alerting, website monitoring, cron job monitoring, especially for small teams and solo founders.
I guess they both fall under the category of "how do you build reliable systems out of unreliable distributed components" :)
Features: Chat with page, fix grammar, reply to emails, messages, translate, summarize, etc.
Yes, you can use your own API key.
please check it out and share your feedback https://jetwriter.ai
The basic idea is that integrating business data into a B2B app or AI agent process is a pain. On one side there's web data providers (Clearbit, Apollo, ZoomInfo) then on the other, 150 year old legacy providers based on government data (D&B, Factset, Moody's, etc). You'd be surprised to learn how much manual work is still happening - teams of people just manually researching business entities all day.
At a high level, we're building out a series of composable deep research APIs. It's built on a business graph powered by integrations to global government registrars and a realtime web search index. Our government data index is 265M records so far.
We're still pretty early and working with enterprise design partners for finance and compliance use cases. Open to any thoughts or feedback.
Currently it looks like this:
- code editor directly in the browser
- writes to your local file system
- UI-specific features built into the editor
- ways to edit the CSS visually as well as using code
- integrated AI chat
But I have tons of features I want to add: asset management, image generation, collaborative editing, etc. It's still a prototype, but I'm actively posting about it on twitter as I go. Soon, I'll probably start publishing versioned builds for people to play with: https://x.com/danielvaughn
https://github.com/whyboris/Video-Hub-App & https://videohubapp.com/
1. Fluxmail - https://fluxmail.ai
Fluxmail is an AI-powered email app that helps you get done with email faster. There are a couple of core tenets/features that it has, including:
- local-first - we don't store your emails and we make interactions as fast as possible
- unified inbox - so you can view emails from all your email addresses in one place
- AI-native - helping you draft emails, search for emails, and read through your emails faster
I'd love to hear if these features resonate with you, or if there are other features that you feel are missing from your current email app.
2. ExploreJobs.ai - https://explorejobs.ai
This is a job board for AI jobs and companies. The job market in AI is pretty hot right now, and there are a lot of cool AI companies out there. I'm hoping to connect job seekers with fast-growing AI companies.
Not earth shattering, but something that should exist.
What I'm building at the moment is a server monitoring solution for STUN, TURN, MQTT, and NTP servers. I wanted the software for this to be portable, so I wrote a simple work queue myself. Python doesn't have built-in linked lists, which is the data structure I'm using for the queues. They allow for O(1) deletes, which you can't really get with many Python data structures. Important for work items when you're moving work between queues.
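The O(1) delete comes from holding a reference to the node itself, so removal is just pointer surgery. A stripped-down version of the idea (not the actual project code):

```
class Node:
    __slots__ = ("work", "prev", "next")
    def __init__(self, work):
        self.work, self.prev, self.next = work, None, None

class WorkQueue:
    def __init__(self):
        self.head = self.tail = None

    def push(self, work):
        node = Node(work)
        if self.tail is None:
            self.head = self.tail = node
        else:
            node.prev = self.tail
            self.tail.next = node
            self.tail = node
        return node                 # keep this handle to unlink in O(1) later

    def unlink(self, node):         # O(1): no scanning, just pointer surgery
        if node.prev: node.prev.next = node.next
        else:         self.head = node.next
        if node.next: node.next.prev = node.prev
        else:         self.tail = node.prev
        node.prev = node.next = None

q = WorkQueue()
pending = q.push("check stun server")
q.unlink(pending)                   # e.g. move the work item to another queue
```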
For the actual workers I keep things very simple. I make something like 100 independent Python processes, each with an event loop. This uses up a crap load of memory, but the advantage is that you get parallel execution without any complexity. It would be extremely complex trying to do that with code alone, and asyncio's event loop doesn't play well with parallelism, so you really only want one per process.
Result: simple, portable Python code that can easily manage monitoring hundreds of servers (sorry didnt mean for that to sound like chatgpt, lmao, incidental.) The DB for this is memory-based to avoid locking issues. I did use sqlite at first but even with optimizations there were locking issues. Now, I only use sqlite for import / export (checksums.)
Not anything special by HN standards but work is here: https://github.com/robertsdotpm/p2pd_server_monitor
I'm at the stage now where I'm adding all the servers to monitor to it. So fun times.
And currently working to make things shareable; also I don't want to use a database.
Here is the demo https://notecargo.huedaya.com/
The Pain Point: If you are analyzing a large YouTube channel (e.g., for language study, competitive analysis, or data modeling), you often need the subtitle files for 50, 100, or more videos. The current process is agonizing: copy-paste URL, click, download, repeat dozens of times. It's a massive time sink.
My Solution: YTVidHub is designed around bulk processing. The core feature is a clean interface where you can paste dozens of YouTube URLs at once, and the system intelligently extracts all available subtitles (including auto-generated ones) and packages them into a single, organized ZIP file for one-click download.
Target Users: Academic researchers needing data sets, content creators doing competitive keyword analysis, and language learners building large vocabulary corpora.
The architecture challenge right now is optimizing the backend queuing system for high-volume, concurrent requests to ensure we can handle large batches quickly and reliably without hitting rate limits.
It's still pre-launch, but I'd love any feedback on this specific problem space. Is this a pain point you've encountered? What's your current workaround?
Taking a break from tech to work on a luxury fashion brand with my mum. She hand-paints all the designs. The first collection is a set of silk scarves, and we're moving into skirts and jackets soon.
Been a wonderful journey to connect with my mum in this way. And also to make something physical that I can actually touch. Tech seems so…ephemeral at times
It's largely finished and functional, and I'm now focused on polish and adding additional builtin functions to expand its capabilities. I've been integrating different geometry libraries and kernels as well as writing some of my own.
I've been stress-testing it by building out different scenes from movies or little pieces of buildings on Google Maps street view - finding the sharp edges and missing pieces in the tool.
My hope is for Geotoy to be a relatively easy-to-learn tool and I've invested significantly in good docs, tutorials, and other resources. Now my goal is to ensure it's something worth using for other people.
The current challenge is the display. I’ve struggled to learn about this part more than any other. After studying DVI and LVDS, and after trying to figure out what MIPI/DSI is all about, I think parallel RGB is the path forward, so I’ve just designed a test PCB for that, and ordered it from JLCPCB’s PCBA service.
[1] https://www.robinsloan.com/notes/home-cooked-app/ [2] https://booplet.com/blog/anyone-can-cook
[1] https://nid.nogg.dev [2] https://mood.drone.nogg.dev
Also working on a youtube channel [3] for my climbing/travel videos, but the dreary state of that website has me wondering whether it's worth it, tbh. I haven't been able to change my channel name after trying for weeks. It's apparently the best place to archive edited GoPro footage at least.
After spending so much of my career dealing with APIs and building tooling for that I feel there's huge gap between what is needed and possible vs how the space generally works. There's a plethora of great tools that do one job really well, but when you want to use them the integration will kill you. When you want to get your existing system in them it takes forever. When you want to connect those tools that takes even longer.
The reality I'm seeing around myself and hearing from people we talk to is that most companies have many services in various stages of decay. Some brand new and healthy, some very old, written by people who left, acquired from different companies or in languages that were abandoned. And all of that software is still generating a lot of value for the company and to be able to leverage that value APIs are essential. But they are incredibly hard and slow to use, and the existing tools don't make it easier.
Redesigning investment holdings for wider screens and leaning on hotwired turbo frames. Thankful for once-campfire as a reference for how to structure the backend. The lazy loading attribute works great with css media queries to display more on larger viewports.
Enjoying learning modern css in general. App uses tailwind, but did experiment with just css on the homepage. Letting the design emerge organically from using it daily, prototype with tailwind, then slim it back down with plain css.
Making a photo-based calorie tracker accurate.
They’re always on. They log into real sites, click around, fill out forms, and adapt when pages change — no brittle scripts, no APIs needed. You can deploy one in minutes, host it yourself, and watch it do work like a human (but faster, cheaper, never tired).
Kind of like a “browser-use cloud,” except it’s yours — open, self-hostable, and way more capable.
I started my program in Swift and SwiftUI, although for various reasons I'm starting to look at Dart and Flutter (in part because being multiplatform would be beneficial, and in part because I am getting the distinct feeling this program is more ambitious than where SwiftUI is at currently). It isn't a direct port of Dramatica by any stretch, instead drawing on what I've learned writing my own novels, getting taught by master fiction writers, and being part of writing workshops. But no other program that I've seen uses Dramatica's neatest concepts, other than Subtxt, a web-based, AI-focused app which has recently been anointed Dramatica's official successor. (It's a neat concept, but it's very expensive compared to the original Dramatica or any other extant "fiction plotting" program. Also, there's a space for non-AI software here, I suspect: there are a lot of creatives who are adamantly opposed to it in any form whatsoever.)
The goal is to make it straightforward to design and deploy small, composable audio graphs that fit on MCUs and similar hardware. The project is in its infancy, so there’s plenty of room for experimentation and contributions.
https://explorer.monadicdna.com/
I'll be adding more features in the coming days!
The big thing I wanted to try is automatic global routing via MQTT.
Everything is globally routable. You can roam around between gateway nodes, as long as all the gateways are on the same MQTT server.
And there's a JavaScript implementation that connects directly to MQTT. So you can make a sensor, go to the web app, type the sensor's channel key, and see the data, without needing to create any accounts or activate or provision anything.
This is built with Rust, egui and SQLite3. The app has a downloader for NSE India reports. These are the daily end of day stock prices. Out of the box the app is really fast, which is expected but still surprises me. I am going to work on improving the stocks chart. I also want to add an AI assisted stocks analyst. Since all the stocks data is on the SQLite3 DB, I should be able to express my stocks screening ideas as plain text and let an LLM generate the SQL and show me in my data grid.
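The planned flow is roughly: plain-text screening idea -> LLM-generated SQL -> run it against the local DB. Sketched in Python with a stubbed LLM call (the app itself is Rust + egui, and the table/column names here are invented):

```
import sqlite3

def llm_generate_sql(idea):
    # stand-in for the real LLM call; imagine it returns something like this
    # for "show me today's most traded stocks"
    return """
        SELECT symbol, close, volume
        FROM eod_prices
        WHERE date = (SELECT MAX(date) FROM eod_prices)
        ORDER BY volume DESC
        LIMIT 10
    """

conn = sqlite3.connect("nse_eod.db")      # the SQLite DB the app already maintains
for symbol, close, volume in conn.execute(llm_generate_sql("most traded stocks today")):
    print(symbol, close, volume)          # ...or feed the rows into the data grid
```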
It was really interesting to generate it within 3 days. I had just a few places where I had to copy from app (std) log and paste into my prompt. Most of the time just describing the features was enough. Rust compiler did most of the heavy lifting. I have used a mix of Claude Code and OpenCode (with either GLM 4.5 or Grok Code Fast 1).
I have been generating full-stack web apps. I built and launched https://github.com/brainless/letsorder (https://letsorder.app/). Building full-stack web apps is basically building 2 apps (at a minimum) so desktop apps are way better it seems.
In the long term, I plan to build apps and help others generate them. I am building a vibe coding platform (https://github.com/brainless/nocodo). I have a couple of early-stage founders I consult for who take my guidance to generate their products (web and mobile apps + backend).
[0] https://apps.apple.com/us/app/reflect-track-anything/id64638...
Attracting new monthly sponsors and people willing to buy me the occasional pizza with my crappy HTML skills.
I've been gathering up the supplies to set up a proper radio/computer repair workshop.
Basically, an agentic platform for working with rich text documents.
I’ve been building this solo since May and having so much fun with it. I created a canvas renderer and all of the word processor interactions from scratch so I can have maximum control over how things are displayed when it comes to features like AI suggestions and other more novel features I have planned for the future.
It helps you monitor metrics, logs, and consumer behavior in real time.
Check it out: https://klogic.io
Book a demo: https://klogic.io/request-demo/
Features:
- Message inspection from any topic — trace and analyze messages, view flow, lag, and delivery status
- Anomaly detection & forecasting — predict lag spikes, throughput drops, and other unusual behaviors
- Real-time dashboards for brokers, topics, partitions, and consumer groups
- Track config changes across clusters and understand their impact on performance
- Interactive log search with filtering by topic, partition, host, and message fields
- Build custom dashboards & widgets to visualize metrics that matter to your team
What pain points do you face in monitoring Kafka? Which features would you like next, and what improvements would you want in dashboards, log search, or message inspection?
An LLM-powered OSINT helper app that lets you build an interactive research graph. People, organizations, websites, locations, and other entities are captured as nodes, and evidence is represented as relationships between them.
Usually assistants like ChatGPT Deep Research or Perplexity are focused on fully automatic question answering, and this app lets you guide the search process interactively, while retaining knowledge in the graph.
The plan is to integrate it with multiple OSINT-related APIs, scrapers, etc
Me being naive, I thought “how hard would it actually be to build a free e-sign tool?”
Turns out not that hard.
In about a weekend, I built a UETA and ESIGN compliant tool. And it was free. And it cost me less than $50. Unlimited free e-sign. https://useinkless.com/
I've been working on the idea for about a year now. I have put up the funds and set up the corporation. Been busy designing the menu, scouting an ideal location and finding the right front-end staff.
I've created two open-source solutions, one which uses a VM (https://github.com/webcoyote/clodpod) and another which creates a limited-user account with access to a shared directory (https://github.com/webcoyote/sandvault).
Along the way I rolled my own git-multi-hook solution (https://github.com/webcoyote/git-multi-hook) to use git hooks for shellcheck-ing, ending files with blank lines, and avoid committing things that shouldn't be in source control.
Not sure what the market is for something like this but it's something I've been thinking a lot about since stepping down as CEO of my previous company.
My goal is two-fold:
1. Help teams make better, faster decisions with all context populating a source-of-truth.
2. Help leaders stay eyes-on, and circumstantially hands-on, without slowing everything down. What I'd hope to be an effective version of "Founder Mode".
If anybody wants to play around with it, here's a link to my staging environment:
https://staging.orgtools.com/magic-share-link/5a917388cf19ed...
The amount of fine tuning we've put into the model has been incredible. Starting to rival human multi-decade professionals in custom club fitting.
Feels like this will be how all human-tool interaction fitting will go.
- Getting into RTL SDR, ordered a dongle, should be fun, want to build a grid people can plug into
- Bringing live transcripts, search and AI to wisprnote
- Moving BrowserBox to a binary release distribution channel for IP enforcement and ease of installation. The public repo will no longer be updated except for docs, version bumps, and the base install script; all dev happens internally, with binaries released to https://github.com/BrowserBox/BrowserBox. Too many "companies" (even "legit", large ones) have been abusing ancient forks and taking our commercial updates without a license, or violating the previous license's conditions, like AGPL source provision. The business lesson: even commercially licensed source-available eats into the sales pipeline, because violators who could pay assume false impunity and take the "freebies" "because they can." There's no perfect protection, but from now on enforcement will ramp up, and source access is only available to minimum-ACV customers as an add-on. So many enhancements are coming down the pipe that it's going to be many improved versions from here
- Creating an improved keyboard for iOS swipe typing; I don't like the settings or the word choices it makes when a swipe is ambiguous, and I think it can be better
http://github.com/patched-network/vue-skuilder, docs-in-progress at https://patched.network/skuilder
I am using this stack now to build an early literacy app targeting kids aged 3-5ish at https://letterspractice.com (also pre-release state, although the email waitlist works I think!). LLM assisted edtech has a lot of promise, but I'm pretty confident I can get the unit cost for teaching someone to read down to 5 USD or less.
My daughter loves stories, and I often struggled to come up with new ones every night. I remember enjoying local folk tales and Indian mythological stories from my childhood, and I wanted her to experience that too — while also learning new things like basic science concepts and morals through stories.
So I built Dreamly and opened it up to friends and families. Parents can set up their child’s profile once - name, age, favorite shows or characters, and preferred themes (e.g. morals, history, mythology, or school concepts). After that, personalized stories are automatically delivered to their inbox every night. No more scrambling to think of stories on the spot!
You can check it out at https://antiques-id-1094885296405.us-central1.run.app/.
YouTube's algorithm is all about engagement - more video game videos, more brainrot, their algorithm doesn't care about the content as long as the kid is watching.
My system allows parents to define their children's interests (e.g., a 12-year-old who enjoys DIY engineering projects, Neil deGrasse Tyson, and drawing fantasy figures) and to specify how the AI should filter video candidates (e.g., excluding YouTube Shorts).
Periodically, the system prompts the child with something like
"Tell me about your favorite family vacation."
And their response to that prompt provides the system with more ideas and interests to suggest videos to them.
Email me at jim.jones1@gmail.com if you'd like to test.
It's a sync infra product that is meant to cut down 6 months of development time, and years of maintenance of deep CRM sync for B2B SaaS.
Every Salesforce instance is a unique snowflake. I am moving that customization into configuration and building a resilient infrastructure for bi-directional sync.
We also recently launched a pretty cool abstraction on top of Salesforce CDC which is notoriously hard to work with: https://www.withampersand.com/blog/subscribe-actions-bringin...
Updated the landing page just yesterday!
Landing page + waitlist: https://dailyselftrack.com/
- Writing a book about Claude Code, not just for assisted programming, but as a general AI agent framework.
https://github.com/anthropics/claude-agent-sdk-python/commit...
Claude Code used to be a coding agent only, but it transformed into a general AI agent. I want to explore more about that in this book.
Makes it easy to search, favourite and listen to online radio streams.
I like to listen to online radio while working and none of the available web apps I could find hit the nail on the head, so decided to build my own.
Shipping pets and animals across borders is a big problem, and we are building the operating system to solve it at scale. If you are a vet (or work in the veterinary space), we would love to talk to you.
It's a few things:
- very fast Japanese->English dictionary
- hiragana / katakana / number / time reading quizzes
- vocabulary quizzes based on wordlists you define and build
- learn and practice kanji anki-style (using FSRS algo)
- the coolest feature (imo) is a "reader": upload Japanese texts (light novels, children's books, etc), then translate them to your native language to practice your reading comprehension. Select text anywhere on the page (with your cursor) to instantly do a dictionary lookup. An LLM evaluates your translation accuracy (0..100%) and suggests other possible interpretations.
I just revamped the UI look and feel the other day after implementing some other user feedback! I'm now exploring ads as a way to monetize it.
Still reducing the design costs of a micro-positioning stage for hobbyists. I observed the driver motion was mostly synchronous and symmetric... Accordingly, given the scale, only a single multiplexed piezoelectric actuator motor driver was actually needed, which cut that part of the design cost by 75%.
Still designing various test platforms to validate other key technologies. Sorry, no spoilers =3
https://www.PAGE.YOGA - Link sharing website
https://www.GamesNotToPlay.com - A couple video games
https://www.ce0.ai - CEO Replacement
https://www.CellularSoup.com - Cellular Automata
https://www.fuck.investments - putting together a fine art gallery
Essentially like yeoman back then, to bootstrap your webapp and all the necessary files more easily.
Currently I am somewhat stuck because of Go's type system, as the UI components require a specific interface for the Dataset or Data/Record entries.
For example, a Pie chart would require a map[string]number which could be a float, percentage string or an integer.
A Line chart would require a slice of map[string]number, where each slice index would represent a step in the timeline.
A table would require a slice of map[string]any where each slice index would represent a step in the culling, but the data types would require a custom rendering method or Stringifier(?) of sorts attached to the data type. So that it's possible to serialize or deserialize the properties (e.g. yes/no in the UI meaning true/false, etc).
As I want to provide UI components that can use whatever struct the developer provides, the Go way would be to use an interface. But that would imply that all data type structs on the backend side would have this kind of clutter attached to them.
No idea if something like a Parser and Stringifier method definition would make more sense for the UI components here...or whether or not it's better to have something like a Render method attached per component that does all the stringifying on a per-property basis like a "func(dataset any, index int, column string) string" where the developer needs to do all the typecasting manually.
Manual typecasting like this would be pretty painful as components then cannot exist in pure HTML serialized form, which is essentially the core value proposition of my whole UI components framework.
An alternative would be offering a marshal/unmarshal API similar to how JSON does it, but that would require the reflect package, which bloats up the runtime binary by several MB and wouldn't be tinygo compatible, so I'd really like to avoid that.
Currently looking for other libraries and best practices, as this issue is really bugging me a lot in the app I'm currently building [3] and it's a pretty annoying type system problem.
Feedback as to how it's solved in other frameworks or languages would be appreciated. Maybe there's an architectural convention I'm not aware of that could solve this.
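For what it's worth, one pattern other Go libraries lean on (in the spirit of sort.Slice-style accessor closures) is to keep the developer's structs clean and hand the component a small typed accessor function instead of requiring an interface on the data. A rough sketch of that shape, not tied to gooey's actual API:

    package main

    import "fmt"

    // PieSlice is what the chart actually needs: a label and a numeric value.
    type PieSlice struct {
    	Label string
    	Value float64
    }

    // RenderPie takes the developer's own slice type plus an accessor closure,
    // so the backend structs never have to implement a UI interface.
    func RenderPie[T any](items []T, slice func(T) PieSlice) string {
    	out := ""
    	for _, it := range items {
    		s := slice(it)
    		out += fmt.Sprintf("%s: %.1f\n", s.Label, s.Value)
    	}
    	return out // in a real component this would be serialized HTML
    }

    // Example backend struct, untouched by UI concerns.
    type Expense struct {
    	Category string
    	Amount   float64
    	Internal bool // irrelevant to the chart
    }

    func main() {
    	expenses := []Expense{{"Rent", 1200, false}, {"Food", 400, true}}
    	fmt.Print(RenderPie(expenses, func(e Expense) PieSlice {
    		return PieSlice{Label: e.Category, Value: e.Amount}
    	}))
    }

The mapping lives at the call site instead of on the struct, and the component only ever sees label/value pairs, so it can still be serialized to plain HTML.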
[1] https://github.com/cookiengineer/gooey-cli
OpenRun allows defining your web app configuration in a declarative config using Starlark (which is like a subset of Python). Setting up a full GitOps workflow is just one command:
openrun sync schedule --approve --promote github.com/openrundev/openrun/examples/utils.star
This will set up a scheduled sync, which will look for new apps in the config and create them. It will also apply any config updates on existing apps and reload apps with the latest source code. After this, no further CLI operations are required, all updates are done declaratively. For containerized apps, OpenRun will directly talk to Docker/Podman to manage the container build and startup.
There are lots of tools which simplify web app deployment. Most of them use a UI driven approach or an imperative CLI approach. That makes it difficult to recreate an environment. Managing these tools when multiple people need to coordinate changes is also difficult.

Any repo which has a Dockerfile can be deployed directly. For frameworks like Streamlit/Gradio/FastHTML/Shiny/Reflex/Flask/FastAPI, OpenRun supports zero-config deployments, there is no need to even have a Dockerfile. Domain based deployment is supported for all apps. Path based deployment is also supported for most frameworks, which makes DNS routing and certificate management easier.
OpenRun currently runs on a single machine with an embedded SQLite database or on multiple machines with an external Postgres database. I plan to support OpenRun as a service on top of Kubernetes, to support auto-scaling. OpenRun implements its own web server, instead of using Traefik/Nginx. That makes it possible to implement features like scaling down to zero and RBAC. The goal with OpenRun is to support declarative deployment for web apps while removing the complexity of maintaining multiple YAML config files. See https://github.com/openrundev/openrun/blob/main/examples/uti... for an example config, each app is just one or two lines of config.
OpenRun makes it easy to set up OAuth/OIDC/SAML based auth, with RBAC. See https://openrun.dev/docs/use-cases/ for a couple of use cases examples: sharing apps with family and sharing across a team. Outside of managed services, I have found it difficult to implement this type of RBAC with any other open source solution.
The idea is that a beginner should be able to wire up a personally useful agent (like a file-finder for your computer) in ten minutes by writing a simple prompt, some simple tools, and running it. Easy to plug in any kind of tracing, etc. you want. I have three or four projects in prod which I'll be switching to use it, just to make sure it fits all those use cases.
But I want to be able to go from someone saying "can we build an agent to" to having the PoC done in a few minutes. Everything else I've looked at so far seems limited, or complicated, or insufficiently hackable for niche use cases. Or, worst of all, in Python.
iOS/Mac app for learning Japanese by reading, all in one solution with optional Anki integration
I went full-time on this a couple years ago. I’m now doing a full iOS 26 redesign, just added kanji drawing, and am almost done adding a manga mode via Mokuro. I’m also preparing influencer UGC campaigns as I haven’t marketed it basically at all yet.
Recently I started executing the upstream spec tests against it, as a means to increase spec conformance. It's non-streaming, which is a non-starter for many use cases, but I'm hoping to provide a streaming API later down the road. Also, the errors interface is still very much WIP.
All that said, it's getting close to a fully-conformant one and it's been a really fun project.
P.S. I'm new to the language so any feedback is more than welcome.
It's a real life treasure hunt in the Blue Ridge Mountains with a current total prize of $31,200+ in gold coins and a growing side pot.
I modeled it off of last year's Project Skydrop (https://projectskydrop.com) which was in the Boston area.
* Shrinking search area (today, Day 5, it will be 160 miles, on Day 21 it'll be just 1 foot wide)
* 24/7 webcam trained on the jar of gold coins sitting on the forest floor just off a public hiking trail
* Premium upgrades ($10 from each upgrade goes towards the side pot) for aerial photos above the treasure and access to a private online community (and you get your daily clues earlier)
* $2 from each upgrade goes towards the goal of raising $20k for continued Hurricane Helene relief
So far the side pot is $6k and climbing.
It's been such a fun project to work on, but also a lot of work. Tons of moving parts and checking twice and three times to make sure you've scrubbed all the EXIF data, etc.
Create REST APIs for PostgreSQL databases in minutes.
- one-man project (me)
- been doing it for well over a year now
- no sponsorship, no investors, no backers, no nothing, just my passion
- I haven't even advertised much; this may be the first or second time I'm sharing a link
- on weekdays I'm building serious stuff with it
- on weekends I'm preparing a new major version with lessons learned from doing a real project with it
Not going to stop. But I might be seeking sponsors in the future, not sure how that will turn out. If not, that's ok, I'm cool being the only user.
A project to implement 1000 algorithms. I have finished around 400 so far and I am now focusing on adding test cases, writing implementations in Python and C, and creating formal proofs in Lean.
It has been a fun way to dive deeper into how algorithms work and to see the differences between practical coding and formal reasoning. The long-term goal is to make it a solid reference and learning resource that covers correctness, performance, and theory in one place.
The project is still in its draft phase and will be heavily edited over the next few months and years as it grows and improves.
If anyone has thoughts on how to structure the proofs or improve the testing setup, I would love to hear ideas or feedback.
https://apps.apple.com/ch/app/diabetes-tagebuch-plus/id16622...
I'm working on a web app that creates easy-to-understand stories and explainers for the sake of language learning. You can listen in your favourite podcast app, or directly on the website with illustrations.
I'm eager to add more languages if anyone is fluent/able to help me evaluate the text-to-speech.
The use case for this is a bit niche, and better tools exist for this general problem in ORMs and so forth, but it works for a problem I have.
This weekend I’m working on making the parsing more robust. The most common friction I’ve heard is that downloading books elsewhere and importing them into the app is distracting. I’m torn between expanding it to include a peer-to-peer book exchange or turning it into an RSS feed reader.
My current prototype scans potential lookalikes for a target domain and then tracks DNS footprint over time. It's early, but functional - and makes it easier to understand if some lookalike domain is looking more "threat-y".
I've also been working on automating the processing of parent-survey responses for my kid's school using LLMs. The goal is to produce consistent summarization and statistics across multiple years, give families a clearer voice, and help staff and leadership at the school understand what has been working well (and where the school could improve).
lpviz is like Desmos, but for linear programming - I've implemented a few LP solvers in Typescript and hooked them up to a canvas so you can draw a feasible region, set an objective direction, and see how the algorithms work. And it all runs locally, in the browser!
If you go to https://lpviz.net/?demo it should show you a short tour of the features/how to use it.
It's by no means complete but I figured there may be some fellow optimization enthusiasts here who might be interested to take a look :) Super open to feedback, feature requests, comments!
For a 2-min intro to LP, I recommend https://www.youtube.com/watch?v=7ZjmPATPVzI
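For a flavor of what you can draw in lpviz, here's a toy two-variable LP of my own (not one of the built-in demos):

    maximize    3x + 2y
    subject to  x + y <= 4
                x     <= 3
                x, y  >= 0

The feasible region is the polygon with vertices (0,0), (3,0), (3,1), and (0,4), and the optimum sits at the vertex (3,1) with value 11 — exactly the kind of geometry the solvers animate as they walk the region.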
Formo makes analytics and attribution simple for onchain apps. You get the best of web, product, and onchain analytics on one versatile platform.
Have learned a lot about data engineering so far.
Truly very impressive.
Throwing in mine. I've been solo-developing Godot games over the last year.
Working on yet another gambling roguelike.
https://store.steampowered.com/app/3839000/Golden_Gambit
I have an artist contracted to do my real assets now.
If anyone is practiced in game balance please reach out if you want to help!
[0] https://github.com/paul-gauthier/entangled-pair-quantum-eras...
Working on a plugin for Langfuse to create eval functions and datasets from ingested traces automatically, based on ad-hoc user feedback.
I regularly browse Reddit (and Hacker News) to keep up with new trends and research topics, but I keep running into these issues:
- It’s hard to find the right communities. Search and recommendation features aren’t quite there, and I don’t want to just passively scroll a feed.
- Going through all the comments takes too long. I just want to quickly grasp the main points people are making. If interested, I can dive in further.
So I started this project to help streamline that process—kind of like a “deep research” workflow for my own browsing.
It’s still early, but it’s already saving me time. If anyone knows of similar tools out there, I’d love to hear about them.
I think this project is an interesting addition as a software supply chain solution, but generating interest in the project in this early stage proves difficult.
For those interested, I maintain a spec in parallel of the development at https://github.com/asfaload/spec
The rough idea is an easy-to-use voice mode to record data, then analyze the unstructured data with AI later on.
I want to track all relevant life information, so what I'm eating, meds I'm taking, headache/nausea levels, etc.
Adding records is as easy as pressing record on my apple watch and speaking some kind of information. Uses Deepgram for voice transcription since it's the best transcription API I've found.
Will then send all information through to a LLM for analysis. It has a "chat with your data" page to ask questions and try and draw conclusions.
Main webapp is done, now working on packaging it into an iOS app so I can pull biometrics from Healthkit. Will then look into releasing it, either on github or possibly in the app store. It's admittedly mostly vibe coded, so not sure if it'll be something releasable, but we'll see...
Let me know if this would interest anyone!
Most recipes are a failure for beginners on the first try. I aim to make recipes bulletproof so anyone can pick up any recipe and it will just work.
The goal is to make the best recipe app ever. On a technical level, recipes are built as graphs and assembled on demand. This makes multilanguage support easy, any recipe can use any unit imaginable, blind people could have custom recipe settings for their needs, search becomes OP, and there is also a Wikipedia-like database with information that links to all recipes. Because of the graphs, nutritional information, environmental impact, cost, etc. can simply be calculated accurately by following linked graphs. Most recipe apps are targeted at specific geographical regions and languages; this graph system removes a lot of barriers between countries and will also be a blessing to expats. Imagine an American in Europe who wishes to use imperial units and English recipes, but with ingredients native to their new homeland. No problem: just follow a different set of nodes and the recipe is created that way for them.
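A minimal sketch of how the graph idea could look (my own illustration, not the actual data model): ingredient nodes carry a canonical amount, and rendering follows a unit node, so the same recipe resolves to metric or imperial by walking different edges.

    package main

    import "fmt"

    // Ingredient node with a canonical amount in grams.
    type Ingredient struct {
    	Name  string
    	Grams float64
    }

    // Unit node: a named conversion away from grams.
    type Unit struct {
    	Name    string
    	PerGram float64 // how many of this unit one gram corresponds to
    }

    // Recipe is a graph of ingredient nodes; rendering follows unit edges.
    type Recipe struct {
    	Title       string
    	Ingredients []Ingredient
    }

    func (r Recipe) Render(u Unit) {
    	fmt.Println(r.Title)
    	for _, ing := range r.Ingredients {
    		fmt.Printf("  %.1f %s %s\n", ing.Grams*u.PerGram, u.Name, ing.Name)
    	}
    }

    func main() {
    	pancakes := Recipe{"Pancakes", []Ingredient{{"flour", 250}, {"sugar", 30}}}
    	pancakes.Render(Unit{"g", 1})              // metric view
    	pancakes.Render(Unit{"oz", 1.0 / 28.3495}) // imperial view of the same graph
    }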
The website is slightly outdated but gives a good idea of what is coming. Current goal is to do beta launch in 2026.
I'm still rebuilding OnlineOrNot's frontend to be powered by the public REST API. Uptime checks are now fully powered by a public API (still have heartbeat checks, maintenance windows, and status pages to go).
Doing this both as a means of dogfooding, and adding features to the REST API that I easily dumped into the private GraphQL API without thinking too hard. That, and after I finish the first milestone (uptime checks + heartbeat/cron job monitors), I'll be able to start building a proper terraform provider, and audit logs.
Basically at the start of the year I realised GraphQL has taken me as far as it can, and I should've gone with REST to start with.
A unified platform for product teams to announce updates, maintain a changelog, share roadmaps, provide help documentation and collect feedback with the help of AI.
My goal is to help product teams tell users about new features (so they actually use them), gather meaningful feedback (so they build the right things), share plans (so users know what's coming), and provide help (so users don't get stuck).
Doing it as an indie hacker + solo founder + lean. Started 13 days ago. Posting about my journey on Youtube every week day https://www.youtube.com/@dave_cheong
Last month:
• wrote my first NEON SIMD code
• implemented adaptive quadrature with Newton–Cotes formulas (see the sketch after this list)
• wrote a tiny Markov-chain text generator
• prototyped an interactive pipeline system for non-normalized relational data in Lua by abusing operator overloading
• load-tested and taste-tested primary batteries at loads exceeding those in the datasheet; numerically simulated a programmable load circuit for automating the load testing
• measured the frequency of subroutine calls and leaf subroutine calls in several programs with Valgrind
• wrote a completely unhealthy quantity of commentary on HN
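On the adaptive quadrature bullet above: for reference, a compact sketch of the classic adaptive Simpson's rule (Simpson's rule being the closed 3-point Newton–Cotes formula). This is the textbook recursion, not necessarily what I implemented.

    package main

    import (
    	"fmt"
    	"math"
    )

    // simpson applies the 3-point Newton–Cotes (Simpson's) rule on [a, b].
    func simpson(f func(float64) float64, a, b float64) float64 {
    	return (b - a) / 6 * (f(a) + 4*f((a+b)/2) + f(b))
    }

    // adaptive halves the interval until the two-panel estimate agrees
    // with the one-panel estimate to within the requested tolerance.
    func adaptive(f func(float64) float64, a, b, eps float64) float64 {
    	m := (a + b) / 2
    	whole := simpson(f, a, b)
    	left, right := simpson(f, a, m), simpson(f, m, b)
    	if math.Abs(left+right-whole) < 15*eps {
    		return left + right + (left+right-whole)/15
    	}
    	return adaptive(f, a, m, eps/2) + adaptive(f, m, b, eps/2)
    }

    func main() {
    	// ∫ sin(x) dx over [0, π] = 2
    	fmt.Println(adaptive(math.Sin, 0, math.Pi, 1e-9))
    }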
New ideas I'm thinking about include backward-compatible representations of soft newlines in plain ASCII text, multitouch calculators supporting programming by demonstration, virtual machines for perfectly reproducible computations, TCES energy storage for household applications beyond climate control such as cooking and laundry, canceling the harmonic poles of recursive comb filters with zeroes in the nonrecursive combs of a Hogenauer filter, differential planetary transmissions for compact extreme reductions similar to a cycloidal drive, rapid ECM punching in aluminum foil, air levigation of grog, ultra-cheap passive solar thermal collectors, etc. Happy to go into more detail if any of these sound interesting.
We are in it for long term. Not a startup, not looking for investment. Just plain paid product (free while in beta) by a few people. We have a few active users, and are looking for more before we remove the beta label :) It's a PWA app. Currently targeted for desktops. For personal software, I think local-first makes a lot of sense.
I think app icons are an underrated artistic format, but they’ve only been used for product logos. I made 001 to explore the idea of turning them into an open-ended creative canvas. There are 99 “exhibit spaces” in the gallery, and artists can claim an exhibit to install art within. Visitors purchase limited-edition copies of pieces to display as the app’s icon, the art’s native format.
It’s a real-money marketplace too - the app makes money by taking commission of sales (Not crypto). I like economic simulation games and I think the constraints here could be interesting.
I’m currently looking for artists to exhibit in the gallery, if anyone is interested, or knows someone who may be, please let me know!
A scanner for pilots to convert handwritten flight logs to CSV files: https://apps.apple.com/us/app/flightlogscan/id6739791303
And a silly, fun, speed-based word game: https://apps.apple.com/us/app/scramble-game/id6748549424 (my record is <4 seconds lmk if you can beat it!)
Let me know what you think :D
We received data last week verifying we are effectively mineralizing CO2 at a high rate while saving our farmer $135/acre annually in liming costs.
We’ve come this far on grants. Now it’s time to fundraise so we can bankroll our PhDs whilst we secure pre-purchase offtake deals.
If you know of any impact investors or are an offtake buyer at a large company, please email me at zach@goal300.earth
There is nothing special compared to other livechats; the goal is to offer an affordable and unlimited livechat for small projects and companies.
Started from the poor state of many Python HTTP clients and the poor testing utilities there are for them (e.g. the neglected state of httpx and all its perf issues).
Lately, I've been hacking on improving its linear algebra support (as that's one of the key focuses I want - native matrix/vector types and easy math with them), which has also helped flush out a bunch of codegen bugs. When that gets tedious, I've also been working on general syntax ergonomics and fixing correctness bugs, with a view to self-hosting in the future.
Open sourcing them of course. I find that I can sketch out a basic idea with Copilot and it'll get 80% of the way there.
Godot is simply a joy, as long as you understand what it can do and what it can't.
It will never ever happen in my wildest dreams, but I want to make open source games full time.
I want the entire game industry to have to compete with high quality open source games and frameworks.
Assuming I ever have a chance to retire, I'll be an old man writing code for sport.
https://github.com/leogout/rasper-ducky
Duckyscript is the language for the USB Rubber Ducky, which costs approximately $100. A USB Rubber Ducky is a USB key that gets recognized as a keyboard and starts typing text and shortcuts automatically once you plug it into anything. To tell the key what to type, you use Duckyscript.
I'm using circuitpython. The last thing I did was to de-recursify the interpreter with a stack.
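De-recursifying an interpreter usually means swapping the host language's call stack for an explicit work stack. A tiny sketch of that shape (in Go for illustration, since the project itself is CircuitPython), with a made-up, simplified statement type:

    package main

    import "fmt"

    // Stmt is a hypothetical, simplified statement: a leaf command
    // plus an optional block of nested statements (think IF/WHILE bodies).
    type Stmt struct {
    	Cmd  string
    	Body []Stmt
    }

    // run executes a program without recursion by keeping an explicit stack
    // of "what's left to do" at each nesting level.
    func run(program []Stmt) {
    	stack := [][]Stmt{program}
    	for len(stack) > 0 {
    		top := stack[len(stack)-1]
    		if len(top) == 0 { // this level is done, pop it
    			stack = stack[:len(stack)-1]
    			continue
    		}
    		stmt := top[0]
    		stack[len(stack)-1] = top[1:] // advance this level
    		if stmt.Cmd != "" {
    			fmt.Println("exec:", stmt.Cmd) // e.g. type a string, press a key
    		}
    		if len(stmt.Body) > 0 {
    			stack = append(stack, stmt.Body) // descend without recursing
    		}
    	}
    }

    func main() {
    	run([]Stmt{
    		{Cmd: "STRING hello"},
    		{Cmd: "IF cond", Body: []Stmt{{Cmd: "DELAY 100"}, {Cmd: "STRING nested"}}},
    		{Cmd: "STRING done"},
    	})
    }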
The more I implement of Duckyscript, the more I think that I should create my own language. Duckyscript sucks as a language...
I am currently developing a web app consisting of a spring/kotlin backend for an angular frontend that is meant to provide a UI for kubectl. It has oAuth login and allows you to store several kubernetes configs, select which one to use and makes it unnecessary to remember all the kubectl commands I can never remember.
It's what I'd like to have if I had to interact with a kubernetes cluster at work. Yes, I know there are several kubernetes UIs already, but remember, this is for 1) learning and 2) following through and completing a project at least somewhat.
I'm working on a mini-project which monitors official resources on the web and sends email notifications on time. Currently covering around 15000 inhabitants.
The first is a DNS blocker called Quietnet - https://quietnet.app. It's born out of my interest in infrastructure, and I wanted to build an opinionated DNS blocker that helps mom and pops be safer on the Internet. At the end of the day it's just the typical Pi-hole on the cloud, but with my personal interest in providing stronger privacy for our users while keeping their families safe.
The second is a small newsletter aggregator tool called Newsletters.love - https://newsletters.love/.
I wanted to create a way for people to start curating their own list of newsletters and then share them with their friends and families. The service generates a private email address that they can use to subscribe to newsletters and then read those newsletters whenever they want, without them getting lost in their email inbox.
It looks inside each file to see what it’s about, then moves it to the right folder for you.
Everything happens on your Mac, so nothing leaves your computer. No clouds, no servers.
It already works with PDFs, ePubs, text, Markdown, and many other file types. Next I’m adding Microsoft Office and iWork support.
If you have messy folders anywhere on your Mac, Fallinorg can help.
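As a toy sketch of the general "peek inside and route" idea (my own illustration in Go, not how Fallinorg actually works; the content sniffing here is just MIME detection):

    package main

    import (
    	"fmt"
    	"net/http"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // classify peeks at the first bytes of a file and picks a destination folder.
    // The mapping is deliberately simplistic; a real organizer would look at
    // actual content and semantics, not just the MIME type.
    func classify(path string) string {
    	f, err := os.Open(path)
    	if err != nil {
    		return "Unsorted"
    	}
    	defer f.Close()
    	buf := make([]byte, 512)
    	n, _ := f.Read(buf)
    	ct := http.DetectContentType(buf[:n])
    	switch {
    	case ct == "application/pdf":
    		return "Documents"
    	case strings.HasPrefix(ct, "image"):
    		return "Images"
    	case strings.HasPrefix(ct, "text"):
    		return "Notes"
    	default:
    		return "Unsorted"
    	}
    }

    func main() {
    	src := "Downloads"
    	entries, _ := os.ReadDir(src)
    	for _, e := range entries {
    		if e.IsDir() {
    			continue
    		}
    		from := filepath.Join(src, e.Name())
    		destDir := classify(from)
    		os.MkdirAll(destDir, 0o755)
    		fmt.Println(from, "->", filepath.Join(destDir, e.Name()))
    		// os.Rename(from, filepath.Join(destDir, e.Name())) // uncomment to actually move
    	}
    }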
It supports multiple LLM providers: OpenAI, Anthropic, xAI, DeepSeek, Gemini, OpenRouter, Z.AI, Moonshot AI, all with automatic failover, prompt caching, and token-efficient context management. Configuration occurs entirely through vtcode.toml, sourcing constants from vtcode-core/src/config/constants.rs and model IDs from docs/models.json to ensure reproducibility and avoid hardcoding. [0], [1], [2]
Recently I've added Agent Client Protocol (ACP) integration. VT Code is now a full ACP agent. [3]
[0] https://github.com/vinhnx/vtcode
Live demo: https://play.tirreno.com/login (admin/tirreno)
Link: https://ohyahapp.com
Interesting challenge was designing for minimal distractions while keeping setup simple for parents. Timer-locked navigation so kids can see what's next but can't start other tasks or switch profiles. Also refactored from schedule-centric (nightmare to maintain) to task-definitions as first-class citizens, which made creating schedules way easier
React Native/Expo + Firebase. On the App Store after months of dogfooding with the family
Browser version here, if you're curious:
I am overengineering a simulation-based solution to this because I think there are scenarios based on cup shapes and environmental temperatures that allow either answer to be true. This will end up as a blog post I guess.
The main feature: you can run multiple language servers simultaneously for the same buffer.
One of the main reasons people stick with lsp-mode over Eglot has been the lack of multi-server support. Eglot is otherwise the most "emacsy" LSP client, so I'm working on filling that gap and I hope it could be merged into Emacs one day.
This is still WIP but I've been using it for a while for Python (basedpyright or pyrefly + ruff for linting) and TypeScript (ts-ls + eslint + tailwind language server).
It'll work in sessions where everyone can first suggest games, then in a second phase veto suggestions, then vote, and it'll display the games with the most votes. You can also manage/import a list of your games and it'll show who owns what. It's geared towards video games, but will work for board games too. Hope to release it for everyone in the next few weeks.
From there, users can either send funds to another wallet or spend directly using a pre-funded debit card. It’s still early, but we’re testing with a small group of users who want to receive payments faster and avoid PayPal or wire fees.
If you’re a freelancer or digital nomad interested in trying it out, you can check it out here: https://useairsend.com
I’ve been working for the past 3 years on SelfHostBlocks https://github.com/ibizaman/selfhostblocks, making self-hosting a viable and convenient alternative to the cloud for non technical people.
It is based on NixOS and provides a hand-picked groupware stack: user-facing there is Vaultwarden and Nextcloud (and a bunch more but those 2 are the most important IMO for non technical people as it covers most of one’s important data) and on the backend Authelia, LLDAP, Nginx, PostgreSQL, Prometheus, Grafana and some more. My know-how is in how to configure all this so they play nice together and to have backups, SSO, LDAP, reverse proxy, etc. integration. I’m using it daily as the house server, I’m my first customer after all. And beginning of 2025 it passed my own internal checkpoint to be shared with others and there’s a handful of technical users using it.
My goal is to work on this full time. I started a company to provide a white glove installation, configuration and maintenance of a server with SelfHostBlocks. Everything I’ll be doing will always be open source, same as the whole stack and the server is DIY and repair friendly. The continuous maintenance is provided with a subscription which includes customer support and training on the software stack as needed.
Recent focus has been on geolocation accuracy, and in particular being able to share more data about why we say a resource is in a certain place.
Lots of folks seem to be interested in this data, and there's very little out there. Most other industry players don't talk about their methodology, and those that do aren't overly honest about how X or Y strategy actually leads to a given prediction, or the realistic scale or inaccuracies of a given strategy, and so on. So this is an area I'm very interested in at the moment and I'm confident we can do better in. And it's overall a fascinating data challenge!
I have been working on it for the last two years as a side project, but starting March will be my full time job! Kind of excited and scared at the same time
- What: Sun Grid Engine–style scheduler + Docker on System-on-Module (SoM) boards for reproducible tests/benchmarks and interactive SSH sessions (remote dev).
- Who: Robotics/embedded engineers comparing SoMs and tuning models/pipelines on target platforms.
- Why: Reproducible runs, easy board access, comparable reports.
Pulled this side project off the shelf — something I started after covid, when I was working at one of the consumer robotics companies (used to be the largest back then). Got it mostly working, but never actually released. I tend to dust it off and push it along a bit whenever I’m between jobs. Like now...
Feels good to be back at it.
Slice and Share; framing, diptychs, also helps share photos on social media without cropping: https://apps.apple.com/app/slice-and-share/id6752774728
Both are free, no ads, no account required. I use them myself; I’m looking to improve them too so feedback is very welcome.
Also been doing small little prototypes with cursor/claude for a game I'd love to tinker on more.
https://prototype-actions.prefire.app/
https://prototype-fov.prefire.app/
It's quite an interesting process to vibe code game stuff where I have a vague concept of how to achieve things but no experience/muscle memory with three.js & friends.
They mostly work already, would appreciate testing from anyone who already has a larger, real-world Litestream v0.5.0 setup running.
https://fly.io/blog/litestream-revamped/#lightweight-read-re...
https://github.com/ncruces/go-sqlite3/tree/litestream/litest...
You define the resources needed for each activity, the time per activity, and the dependencies between activities to complete a process.
After you input the process you want to complete, you get a schedule similar to a Gantt chart.
The system displays which activities should be ongoing at any moment; you click the GUI or call the API to complete the activities.
After the process is complete you get a report of delays and deviations by takts, activities, and resources.
Based on that report you can decide what improvements to make to your process.
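A rough sketch of that input model (my own simplified version, not the product's actual schema): each activity declares a duration, required resources, and dependencies, and a dependency-only pass gives earliest start times for the Gantt-style view.

    package main

    import "fmt"

    // Activity is a hypothetical, simplified representation of the inputs above.
    type Activity struct {
    	Name      string
    	Hours     int
    	Resources []string
    	DependsOn []string
    }

    // earliestStarts computes start times from dependencies alone, assuming
    // an acyclic graph and unlimited resources (a real scheduler would also
    // level resources and compute takts).
    func earliestStarts(acts []Activity) map[string]int {
    	byName := map[string]Activity{}
    	for _, a := range acts {
    		byName[a.Name] = a
    	}
    	start := map[string]int{}
    	var resolve func(name string) int
    	resolve = func(name string) int {
    		if s, ok := start[name]; ok {
    			return s
    		}
    		s := 0
    		for _, dep := range byName[name].DependsOn {
    			if end := resolve(dep) + byName[dep].Hours; end > s {
    				s = end
    			}
    		}
    		start[name] = s
    		return s
    	}
    	for _, a := range acts {
    		resolve(a.Name)
    	}
    	return start
    }

    func main() {
    	acts := []Activity{
    		{Name: "frame", Hours: 8, Resources: []string{"crew A"}},
    		{Name: "wiring", Hours: 4, Resources: []string{"electrician"}, DependsOn: []string{"frame"}},
    		{Name: "drywall", Hours: 6, Resources: []string{"crew B"}, DependsOn: []string{"wiring"}},
    	}
    	for name, s := range earliestStarts(acts) {
    		fmt.Printf("%-8s starts at hour %d\n", name, s)
    	}
    }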
It's basically Snapchat, but without other people.
Currently in AppStore review!
I was tired of only having 1 or 2 things per newsletter that interested me, multiplied by however many newsletters I've subscribed to. Trying to solve that.
The idea: design newsletter sections on whatever topics you want (football scores, tech news, new restaurants in your area, etc.), choose your tone and length preferences, then get a fully cited digest delivered weekly to your inbox. Completely automated after initial setup (but you can refine it anytime).
Have the architecture sorted and a pretty good dev plan, but collecting interest before I invest a ton of time into it.
If you feel this pain too, waitlist is here: https://www.conflio.app/
(Or maybe I'm just too lazy about staying informed haha)
It's a full-funnel marketing attribution & insights tool with the intent of making marketing & marketing spend more transparent. We started by creating a UTM tracking tool for our agency clients, and currently it's a product on its own. We'll make it a platform to remove some of the limits that we have with WordPress and reach a larger audience.
EU based.
So I built Riff Radar - it creates playlists from your followed artists' complete discography, and allows you to tailor them in multiple ways. Those playlists are my top listened to. I know, because you can also see your listening statistics (at the mercy of Spotify's API).
The playlists also get updated daily. Think of it as a better version of the daily mixes Spotify creates.
That main usecase is done. I’m now focusing on travel guides for remote workers. Goal is to help those new to a country to become as productive as they would be at home within 2-3 hours upon landing at the airport. I completed 80% of a guide to South Korea.
I started working on these guides after my friends in Tokyo commented during our last co-working session on how fast I got to our favourite spot (Tokyo Innovation Base) from Narita Airport; they thought I was already in-town.
It is a DNS service for AWS EC2 that keeps up with ever-changing IPs when you cannot use an Elastic IP (e.g. with an ASG) or when you don't want to install any third-party clients on your instances.
It fetches the IPs regularly via the AWS API and assigns them to fixed subdomains.
It is pretty new :) still developing actively.
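A minimal sketch of the polling half (assuming aws-sdk-go-v2 and instances tagged with a Name that maps to the subdomain; the actual DNS upsert, e.g. via Route 53, is left out since I don't know how the service does it):

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/aws/aws-sdk-go-v2/config"
    	"github.com/aws/aws-sdk-go-v2/service/ec2"
    )

    func main() {
    	ctx := context.Background()
    	cfg, err := config.LoadDefaultConfig(ctx)
    	if err != nil {
    		log.Fatal(err)
    	}
    	client := ec2.NewFromConfig(cfg)

    	// List instances and print tag-name -> public IP; a real service would
    	// then upsert A records for the fixed subdomains on each poll.
    	out, err := client.DescribeInstances(ctx, &ec2.DescribeInstancesInput{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, res := range out.Reservations {
    		for _, inst := range res.Instances {
    			name := ""
    			for _, t := range inst.Tags {
    				if t.Key != nil && *t.Key == "Name" && t.Value != nil {
    					name = *t.Value
    				}
    			}
    			if inst.PublicIpAddress != nil {
    				fmt.Printf("%s.example.com -> %s\n", name, *inst.PublicIpAddress)
    			}
    		}
    	}
    }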
It works by specializing for the common case of read-only workloads and short, fixed-length keys/includes (int, uuid, text<=32b, numeric, money, etc - not json) and (optionally) repetitive key-values (a common case with short fixed-length keys). These kinds of indexes/tables are found in nearly every database for lookups, many-many JOIN relationships, materialized views of popular statistics, etc.
Currently, it's "starting to work" with 100% code coverage and performance that usually matches/beats btree in query speed. Due to compression, it can consume as little as 99.95% less memory (!) and associated "pressure" on cache/ram/IO. Of course, there are degenerate cases (e.g. all unique UUID, many INCLUDEs, etc) where it's about the same size. As with all indexes, performance is limited by the PostgreSQL executor's interface which is record-at-a-time with high overhead records. I'd love help coding a FDW which allows aggregates (e.g. count()) to be "pushed down" and executed in still requires returning every record instead of a single final answer. OTT help would be a FDW interface where substantial query plans could be "pushed down" e.g. COUNT().
The plan is to publish and open source this work.
I'd welcome collaborators and have lots of experience working on small teams at major companies. I'm based in NYC but remote is fine.
- must be willing to work with LLMs and not "cheat" by hand-writing code.
- Usage testing: must be comfortable with PostgreSQL and indexes. No other experience required!
- Benchmarking, must know SQL indexes and have benchmarking experience - no pgsql internals required.
- For internals work, must know C and SQL. PostgreSQL is tricky to learn but LLMs are excellent teachers!
- Scripting code is in bash, python and Makefile, but again this is all vibe coded and you can ask LLMs what it's doing.
- any environment is fine. I'm using linux/docker (multi-core x86 and arm) but would love help with Windows, native MacOS and SIMD optimization.
- I'm open to porting/moving to Rust, especially if that provides a faster path to restricted environments like AWS RDS/Aurora.
- your ideas welcome! but obviously, we'll need to divide and conquer since the LLMs are making rapid changes to the core and we'll have to deal with code conflicts.
DM to learn more (see my HN profile)
https://github.com/bobjansen/mealmcp
There is a website too so you don’t actually need to use MCP:
Very very beta. No stated mission just working with smart people on interesting ideas.
It is a simple NPM package that generates colorful avatars from input data to aid in quick visual verification.
I would like to see it adopted as a standard.
In the 2nd stage, I will mathematically establish the best course of action for an individual, given the base theory.
In the 3rd stage, I will explain common psychological phenomena through the theory, things like narcissism, anxiety, self-doubt, how to forgive others, etc.
In the 4th stage, I will explain how the theory is the fastest way to learn across multiple domains and how anyone can become a generalist and critical thinker.
In the 5th stage, I will explain how society will unfold if everyone can become a generalist and critical thinker through the theory, and how this is the next big societal breakthrough, like the Industrial Revolution.
In the 6th and last stage, I will think about how to use this theory to make India the next superpower, as it can give us a demographic advantage.
Shared more about the algorithm here https://x.com/admiralrohan/status/1973312855114998185
## AI-Related Projects
* *[justinc8687] Migraine Tracker:* This project aims to help users track their migraines using voice input, with the goal of analyzing unstructured data with AI to find root causes. It uses Deepgram for transcription and an LLM for analysis, with a "chat with your data" feature.
* *[dcheong] User Mastery:* A platform for product teams to manage updates, changelogs, roadmaps, documentation, and feedback, utilizing AI to assist.
* *[jared_stewart] Survey Response Automation:* Using LLMs to automate the processing of parent survey responses for a school, aiming for consistent summarization and statistics.
* *[codybontecou] Voice-Script:* A tool that allows users to discuss and generate GitHub issues, pull requests, and code diffs using ChatGPT's voice agents.
* *[conditionnumber] LLM for Data Matching:* Proposes using an LLM to score and match candidates identified by a tool like "jellyjoin," reducing a large number of potential matches to a manageable set for AI analysis.
* *[taherchhabra] Infinite Canvas for AI Generation:* A platform for AI image, video, audio, and 3D generation, designed to help create cohesive stories with consistent characters and locations.
* *[chipotle_coyote] Story Theory Program (Spiritual Successor to Dramatica):* Aims to create a story theory and brainstorming program, drawing inspiration from Dramatica but incorporating modern concepts, and potentially using AI for some aspects.
* *[rhl314] Magnetron (Whiteboard Explainers):* An AI-powered tool that generates whiteboard explainer videos from prompts or documents, using AI for design, animations, and voiceovers.
* *[adamsaparudin] AI SaaS Workflow:* A project focused on enabling users to launch their own AI SaaS applications quickly, abstracting away complexities like user management and billing.
* *[garbage] Dreamly.in (AI Bedtime Stories):* An automated, personalized, and localized bedtime story generator for children, using AI to create stories based on child profiles and themes.
* *[nowittyusername] Metacognitive AI System:* This project focuses on creating an AI agent with multiple specialized LLMs that can reason, analyze, and communicate internally to provide more sophisticated responses to humans, rather than just acting as a simple chatbot.
* *[fjulian] Veila (Privacy-First AI Chat):* A privacy-focused AI chat service that uses a proxy to prevent user profiling and offers end-to-end encrypted history, allowing users to switch models mid-chat.
* *[ai-christianson] Gobii Platform (Open-Source AI Employees):* Browser-based AI agents that can log into real websites, fill out forms, and adapt to changes, functioning as "browser-use cloud" employees.
* *[apf6] Dev Tools for MCP Servers:* Building libraries to help write tests for MCP (Model Context Protocol) servers, focusing on AI-related development.
* *[mfrye0] Plaid/Perplexity for Business Data:* Creating composable deep research APIs powered by a business graph and web search index to integrate business data into applications and AI agent processes.
* *[vishakh82] Monadic DNA Explorer:* A tool to explore genetic traits from GWAS Catalog and user DNA data, with AI insights run locally in a TEE (Trusted Execution Environment).
* *[jerrygoyal] JetWriter.ai:* A Chrome extension that uses AI to assist with tasks on any website, such as chatting with pages, fixing grammar, replying to emails, translating, and summarizing.
* *[chadwittman] Eldrick.golf (AI Golf Club Fitter):* An AI-powered golf club fitting tool that aims to rival human professionals in custom club fitting.
* *[jiffylabs] AI Governance and Security Platform:* A platform and browser extension to provide visibility into AI tool usage within organizations, discover shadow AI, enforce policies, and create audit trails. It also acts as a coach for end-users.
* *[aantix] Alternative YouTube App for Kids:* An app that uses AI to filter YouTube videos based on parental-defined interests and prompts children for input to discover new interests, moving away from engagement-driven algorithms.
* *[qwikhost] Video AI Editor:* A tool for editing videos using AI.
* *[accountisha] CPA Exam Prep Tool:* A system that generates word problems and step-by-step solutions to help individuals prepare for the American CPA exams.
* *[felixding] Kintoun.ai:* A simple document translator that preserves file formatting and layout, likely using AI for translation.
* *[skyfantom] LLM + Stocks Market Analysis:* Experimenting with LLMs for stock market analysis and comparing different models for their effectiveness.
* *[braheus] English-to-Function Definition (LLM):* A library that allows defining functions in English using an LLM, which can then be used like regular TypeScript functions, enabling agentic orchestration.
* *[gametorch] AI Sprite Animator:* An AI-powered tool for animating sprites in 2D video games.
* *[sab_hn] Endless Chinese:* An AI-generated story platform for learning Chinese, aiming to create a vast number of short stories for beginners.
* *[asdev] FleetCode (Coding Agent Control Panel):* An open-source control panel for running coding agents in parallel.
* *[trogdor] AI Document Summarization/Analysis:* A tool that uses AI to analyze documents and provide summaries, potentially for research or other forms of content consumption.
* *[osint.moe] LLM-Powered OSINT Helper:* An app that uses LLMs to build an interactive research graph for Open Source Intelligence (OSINT) gathering.
* *[kintoun.ai] Document Translator:* A tool that translates documents while preserving formatting and layout, likely leveraging AI.
* *[mclaren] AI-powered code generation and analysis tools.*
* *[skanga] Conductor (LLM-Agnostic Framework):* A framework for building sophisticated AI applications using a subagent architecture, inspired by concepts of "The Rise of Subagents."
* *[ashdnazg] Palindrome Finding (CUDA):* Porting code to CUDA to find palindromes, with a focus on GPU optimization and exploring new elements in number series.
* *[veesahni] AI in Customer Communications:* Exploring effective, hype-free usage of AI in customer communications.
* *[cryptoz] Code+=AI (AI Webapp Builder):* A platform for building AI web apps where API calls are proxied, and users are charged for token usage, with creators earning a percentage of the revenue. The LLM is also used to modify code.
* *[exasperaited] Recovering from Cognitive Impairment:* Using AI tools to help clarify thoughts and potentially recover cognitive abilities lost due to a past event.
* *[waxycaps] CEO Replacement:* A project related to AI that has the goal of replacing a CEO.
* *[vladoh] Simple Photo Gallery (V2):* While not AI-specific, the mention of a future SaaS offer for users who don't want to self-host suggests potential for AI-driven features in the future.
* *[dheera] Invoice Generators for "Inconvenience Fees":* While not directly AI, the idea of invoicing for "inconvenience fees" could be an interesting application for AI to determine and quantify such fees.
* *[yomismoaqui] HN Post/Comment Analyzer:* A website for analyzing posts and comments on Hacker News, potentially using AI to filter or summarize content.
* *[ce0.ai] CEO Replacement:* A project explicitly stating it's about replacing a CEO with AI.
* *[robinsloan] Home-cooked App Essay Inspiration:* While not directly an AI project, the mention of this essay and the focus on personal apps could lead to AI-integrated personal tools.
* *[zuhayeer] Levels.fyi Calculator Revamp:* Focusing on improving a calculator page for refreshers and stock growth, which could involve AI for analysis or predictions.
* *[lukehan] AI Data Enrichment Platform:* A platform to help users enrich their data so AI, like ChatGPT, can understand it better, measured by an "AI Understanding Score."
* *[asimovDev] Sound Blaster Command Control:* While primarily reverse engineering, the mention of "creative's multiplatform solutions" could imply future AI integration for smarter control.
* *[daveevad] "Myself, myself needs work":* This self-reflection could involve AI tools for personal development or understanding oneself better.
* *[thenipper] Campaign Management App for TTRPGs:* While primarily a wiki-like app, the potential for AI to assist in game mastering or content generation is present.
Check out my project and my short film at https://cinesignal.com/p/call154
Most sites fall into extremes: Dribbble leans toward polished mockups that never shipped, while Awwwards and Mobbin go heavy on curation. The problem isn’t just what they pick — it’s that you only ever see a narrow slice. High curation means low volume, slow updates, and a bias toward showcase projects instead of the everyday, functional interfaces most of us actually design.
Font of Web takes a different approach. It’s closer to Pinterest, but purely for web design. Every “pin” comes with metadata: fonts, colors, and the exact domain it came from, so you can search, filter, and sort in ways you can’t elsewhere. The text search is powered by multimodal embeddings, so you can use search queries like “minimalist pricing page with illustrations at the side” and get live matches from real websites.
What you can do:
natural language search (e.g. “elegant serif blog with sage green”)
font search (single fonts, pairings, or 2+ combos, e.g https://fontofweb.com/search/pins?family_id=109 , https://fontofweb.com/search/pins?family_id=135 )
color search/sorting (done in perceptual CIELAB space not RGB)
domain search (filter by site, e.g. https://fontofweb.com/search/pins?domain=apple.com, https://fontofweb.com/search/pins?domain=blender.org )
live website analysis (via extension — snip any part of a page and see fonts/colors instantly, works offline)
one-click font downloads
palette extraction (copy hex codes straight to clipboard)
private design collections
I'd appreciate feedback on the UX/UI, the feature set, and general usefulness in your own workflow.
He liked what I built for him and I got jealous, so I expanded it with my own profile (Trail running).
Then, I got curious… Could I build a full web platform for people to track their sporting life? I mean we have LinkedIn and CVs for our job career, why not celebrate all our sports/training efforts as well.
After a couple of months on the side, I'm pretty happy with Flexbase. If you're into sports, give it a try and let me know what's missing for you.
Note: it's mobile-only past the front page.
https://flexbase.co/ My profile: https://flexbase.co/athletes/96735493
You can list the sports you're doing or did in your entire life, you can add your PRs, training routines, gear, competition results, photos. You can also list your clubs, and invite/follow your training buddies.
Honestly, I'm not sure where (or if) to expand it... Turn it into a Club-centric tool, make it more into a social network for sporty people.
Lots of ideas, but I'd love to find someone to work on it with me. I find that building alone is less fun.
Thanks for your sporty feedback.
I noticed a gap - our customers are required to upload sensitive documents but often hesitate at the thought of uploading documents in the intercom/crisp interface, citing privacy concerns.
I thought, how difficult would it be to build an app that sends documents to your own Google drive - turns out it’s very easy. In a week, we built an app that renders an iframe in the intercom chat interface and sends documents straight to our google drive folder, bypassing intercom all together.
We’re now investigating uploading to s3 or azure blob storage and generating summaries of documents that are sent to the intercom conversation thread so ops teams can triage quicker.
Let me know what you think!
That’s why I’ve been building 'Fragno', a framework for creating full-stack libraries. It allows library authors to define backend routes and provides reactive primitives for building frontend logic around those routes. All of this integrates seamlessly into the user’s application.
With this approach, providers like Stripe can greatly improve the developer experience and integration speed for their users.
Recently I've managed to port the game onto a real-world cyberdeck, the uConsole. [1]
[0] https://store.steampowered.com/app/3627290/Botnet_of_Ares/
In parallel, I'm trying to figure out how to train an LLM for SAST.
Interpret your bloodwork for free with the precision of a longevity clinic. You can calculate your biological age based on the best bio-age calculators.
The idea is to eventually add more categories like “restaurants,” “theaters,” “roads,” etc., so you can play based on local themes.
I’d love to hear your thoughts - any feedback on what you’d like to see, what feels off, or any issues you run into would be super helpful.
Right now it connects to local and remote databases like SQLite and Postgres, lets you browse schemas and tables instantly, edit data inline, and create or modify tables visually. You can save and run queries, generate SQL using AI, and import or export data as CSV or JSON. There’s also a fully offline local mode that works great for prototyping and development.
One of the more unique aspects is that DB Pro lets you download and run a local LLM for AI-assisted querying, so nothing ever leaves your machine. You can also plug in your own cloud API key if you prefer. The idea is to make AI genuinely useful in a database context — helping you explore data and write queries safely, not replacing you.
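DB Pro's internals aren't shown here, but the local-LLM pattern it describes can be as simple as prompting a locally served model with the schema plus the user's question. The sketch below assumes an Ollama-style HTTP endpoint on localhost and a hypothetical model name, with SQLite standing in for the connected database.

```python
# Rough sketch of schema-aware, fully local text-to-SQL (assumes an
# Ollama-style server at localhost:11434; not DB Pro's actual implementation).
import json
import sqlite3
import urllib.request

def schema_of(conn: sqlite3.Connection) -> str:
    """Collect CREATE TABLE statements so the model knows the schema."""
    rows = conn.execute(
        "SELECT sql FROM sqlite_master WHERE type = 'table' AND sql IS NOT NULL"
    ).fetchall()
    return "\n".join(r[0] for r in rows)

def generate_sql(conn: sqlite3.Connection, question: str, model: str = "llama3") -> str:
    prompt = (
        "Given this SQLite schema:\n"
        f"{schema_of(conn)}\n\n"
        f"Write a single read-only SQL query answering: {question}\n"
        "Return only the SQL."
    )
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"].strip()

conn = sqlite3.connect("app.db")  # placeholder database
print(generate_sql(conn, "Which 10 customers placed the most orders last month?"))
```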
The next big feature is a Visual Query Builder with JOIN support that keeps the Visual, SQL, and AI modes in sync. After that, I’m working on dashboards, workflow automation, and team collaboration — things like running scripts when data changes or sharing queries across a workspace.
The goal is to make DB Pro the most intuitive way to explore, query, and manage data — without the usual enterprise clutter. It’s still early, but it’s already feeling like the tool I always wanted to exist.
You can see it here: https://dbpro.app
Would love to hear feedback, especially from people who spend a lot of time in database clients — what’s still missing or frustrating in the current landscape?
There are a few similar projects: one is itself a startup that is sadly on the verge of bankruptcy, and another aggregates only IT-related jobs.
I have been working on a one-week side project that ended up taking over a year… I've been working on it periodically with friends to add new features and patch bugs; at the moment I'm trying to expand the file-sharing capabilities. It's been a journey and I have learnt quite a lot.
The aim of this is to be a simple platform for sharing content with others. I'd appreciate any feedback; this is my first time building a user-facing platform. If the free tier is limiting, I've made a coupon "HELLOWORLD" if you want to stress test or try the bigger plans, it gives you 100% off for 3 months.
Our approach is to make the complexity more readable by using three simple block types to represent logic, data, and UI, which are connected by cables, a bit like wiring up components on an electronics breadboard.
Instead of spitting out a wall of code, the AI generates these visual blocks and makes the right connections between them. The ultimate goal is to make LLM output more accessible and actionable for everyone, not just developers.
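As a rough mental model of the block-and-cable idea (hypothetical, not the project's actual schema; block kinds, port names, and the example are made up), the output can be thought of as a small graph:

```python
# Hypothetical data model for the block-and-cable idea: three block kinds
# (logic, data, ui) and cables connecting named ports. Not the actual schema.
from dataclasses import dataclass, field

@dataclass
class Block:
    id: str
    kind: str            # "logic" | "data" | "ui"
    params: dict = field(default_factory=dict)

@dataclass
class Cable:
    src: tuple[str, str]  # (block id, output port)
    dst: tuple[str, str]  # (block id, input port)

@dataclass
class Canvas:
    blocks: dict[str, Block] = field(default_factory=dict)
    cables: list[Cable] = field(default_factory=list)

    def connect(self, src, dst):
        self.cables.append(Cable(src, dst))

# An LLM could emit a structure like this instead of a wall of code:
c = Canvas()
c.blocks["todos"] = Block("todos", "data", {"table": "todos"})
c.blocks["filter"] = Block("filter", "logic", {"expr": "done == False"})
c.blocks["list"] = Block("list", "ui", {"widget": "ListView"})
c.connect(("todos", "rows"), ("filter", "in"))
c.connect(("filter", "out"), ("list", "items"))
```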
So I'm trying to define a multiplication operation using primitive roots.
[0] https://leetarxiv.substack.com/p/if-youre-smart-why-are-you-...
[1] (The other time the US gov put a backdoor in an elliptic curve) https://leetarxiv.substack.com/p/dual-ec-backdoor-coding-gui...
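For readers who haven't met the trick: a primitive root g modulo a prime p generates every nonzero residue, so each x can be written as g^k, and multiplication becomes addition of exponents mod p-1 (the same index-table idea behind old log tables). A small self-contained sketch of that kind of multiplication, using p = 257 and g = 3 as an example:

```python
# Multiplication via a primitive root: with g a primitive root mod prime p,
# every nonzero x = g^k, and x*y corresponds to adding indices mod p-1.
p, g = 257, 3  # 3 is a primitive root modulo the prime 257

# index (discrete log) table: index[g^k mod p] = k, and its inverse
index = {pow(g, k, p): k for k in range(p - 1)}
power = {k: pow(g, k, p) for k in range(p - 1)}

def mul(x: int, y: int) -> int:
    """Multiply nonzero residues mod p using only table lookups and addition."""
    return power[(index[x] + index[y]) % (p - 1)]

# Sanity check against ordinary modular multiplication
assert all(mul(x, y) == (x * y) % p
           for x in range(1, p) for y in range(1, p))
print(mul(12, 34), (12 * 34) % p)
```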
It’s designed to plug into frameworks like CrewAI, AutoGen, or LangChain and help agents learn from both successful and failed interactions - so instead of each execution being isolated, the system builds up knowledge about what actually works in specific scenarios and applies that as contextual guidance next time. The aim is to move beyond static prompts and manual tweaks by letting agents improve continuously from their own runs.
Currently also working on an MCP interface to it, so people can easily try it in e.g. Cursor.
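Since the framework itself isn't shown here, a minimal illustration of the core loop as described: record each run's outcome, retrieve the most relevant past lessons, and prepend them as context on the next run. The names and the keyword-overlap retrieval below are made up for the sketch; the real project presumably uses something smarter (embeddings, scoring, etc.).

```python
# Hypothetical sketch of "learn from past runs": store outcomes per task,
# retrieve similar past lessons, and inject them as guidance next time.
# Keyword-overlap retrieval is a stand-in for real similarity search.
from dataclasses import dataclass

@dataclass
class Episode:
    task: str
    outcome: str   # "success" | "failure"
    lesson: str    # what to repeat or avoid

class ExperienceStore:
    def __init__(self):
        self.episodes: list[Episode] = []

    def record(self, task: str, outcome: str, lesson: str):
        self.episodes.append(Episode(task, outcome, lesson))

    def guidance(self, task: str, k: int = 3) -> str:
        """Return the k most task-similar lessons, formatted for a prompt."""
        words = set(task.lower().split())
        scored = sorted(
            self.episodes,
            key=lambda e: len(words & set(e.task.lower().split())),
            reverse=True,
        )
        return "\n".join(f"[{e.outcome}] {e.lesson}" for e in scored[:k])

store = ExperienceStore()
store.record("scrape pricing page", "failure",
             "Site blocks default user agents; set a real UA header.")
store.record("scrape pricing page", "success",
             "Prices live in a JSON blob inside a <script> tag.")

# Prepend to the agent's prompt on the next similar task:
print(store.guidance("scrape the pricing page of example.com"))
```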
I've kind of been wasting time with the Cloudflare Workers engine, trying to build a system that schedules these workers as a lightweight alternative to GitHub Actions. If you are interested in WASM, feel free to reach out. I'm looking to connect with other developers working in the WASM space.
Obviously this is quite sensitive data, so I architected it to never store raw data, to allow bring-your-own-key, and to be fully private by default even in team settings: everybody keeps control of all their results.
Started about six months ago, have some first users, and always looking for feedback!
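The architecture isn't spelled out above, but the bring-your-own-key idea can be illustrated in a few lines: the server only ever stores ciphertext, and decryption requires a key the user supplies. A hedged sketch using the `cryptography` package, with placeholder field names and no claim that this matches the project's actual design:

```python
# Illustration of bring-your-own-key storage (not the project's actual design):
# results are encrypted with a key only the user holds, so the server stores
# ciphertext and can never read the raw values.
from cryptography.fernet import Fernet

user_key = Fernet.generate_key()   # generated and kept client-side by the user
f = Fernet(user_key)

raw_result = b'{"marker": "LDL", "value": 96, "unit": "mg/dL"}'  # placeholder
ciphertext = f.encrypt(raw_result)          # this is all the server ever stores

# Later, only someone holding user_key can decrypt:
print(Fernet(user_key).decrypt(ciphertext))
```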
Done with Godot in just 7-8 months, it's fun how fast you can create things when you really focus on something :)
It’s fast, free, keyboard-only, cross-platform, and ad-free. It’s been my only source of music for the past 6 months or so.
I’m not sharing the link because of music copyright issues. But I think more people should do that, to break free of the yoke of greedy music platforms.
1. is something that can poll a bunch of websites' workshop/events pages to see if there are any new events [my mother] wants to go to and send a digest to her email
2. is a poller to look up the different Safeway/Co-op/Save-On flyers and so on to see what's on sale between the different places, then send a mail with some recipes it found based on those ingredients
I'm most of the way through 1 (a rough sketch of that kind of poller is below), but haven't started on 2 yet.
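A rough sketch of the kind of poller in 1: fetch each events page, detect changes against a stored hash, and email a digest of whatever changed. The URLs, addresses, and SMTP setup are placeholders, and a real version would diff actual event listings rather than whole-page hashes.

```python
# Rough sketch of poller #1: check a list of workshop/events pages for changes
# and email a digest of anything new. URLs and SMTP settings are placeholders.
import hashlib
import json
import pathlib
import smtplib
import urllib.request
from email.message import EmailMessage

PAGES = ["https://example.org/community-centre/workshops"]  # placeholder URLs
STATE = pathlib.Path("seen_hashes.json")

def fetch(url: str) -> str:
    with urllib.request.urlopen(url, timeout=30) as resp:
        return resp.read().decode("utf-8", errors="replace")

def check_for_updates() -> list[str]:
    """Return pages whose content hash changed since the last run."""
    seen = json.loads(STATE.read_text()) if STATE.exists() else {}
    changed = []
    for url in PAGES:
        digest = hashlib.sha256(fetch(url).encode()).hexdigest()
        if seen.get(url) != digest:
            changed.append(url)
            seen[url] = digest
    STATE.write_text(json.dumps(seen))
    return changed

def send_digest(changed: list[str]):
    msg = EmailMessage()
    msg["Subject"] = "New events this week"
    msg["From"], msg["To"] = "bot@example.org", "mum@example.org"
    msg.set_content("These pages have new listings:\n" + "\n".join(changed))
    with smtplib.SMTP("localhost") as s:  # or an authenticated SMTP relay
        s.send_message(msg)

if __name__ == "__main__":
    updates = check_for_updates()
    if updates:
        send_digest(updates)
```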