HACKER Q&A
📣 joshguggenheim

How do you keep hardware requirements "verified" throughout development?


I’m building Seigo (https://seigo.ai), a continuous alignment tool for hardware system requirements, components, and tests.

After working in HW systems development (seed → public), I’ve repeatedly hit the same failure mode: at any moment, it’s hard to answer “does the current build/config actually satisfy the deliverable requirements?”

------------ The pattern: ------------

Requirements get written, then R&D moves fast (design iterations, part swaps, supplier changes)

During component selection, datasheets are selectively reviewed to address top-of-mind issues — not evaluated line-by-line against every requirement

Tests get created/executed/re-run, but the “proof” ends up scattered across datasheets/PDFs, tickets, logs, scripts, and lab notes

When something changes, there’s rarely a clean way to know what’s now invalidated, what needs re-review / re-test, and what’s actually ready at a program level

Re-running a test often feels like starting over because prior setup/conditions/results aren’t captured in a repeatable, traceable way

------------ The questions: ------------

What tools/methods do you use to define requirements and track system readiness during development?

What was the biggest design oversight you made? When did you realize it? How early could you have recognized and addressed it?

When a requirement changes or a part is substituted, how do you decide what must be re-run / re-reviewed?

What artifacts count as gate-quality evidence for you, and how do you tie them to an exact build/config + requirement intent?

Is this a solvable workflow/tooling problem, or mostly an unavoidable HW tax?


  👤 Jtsummers Accepted Answer ✓
Caveat: I've not used this for your particular problem, but there is a category of software tools that aims to solve this in general.

Requirements management systems. DOORS [0] is one I had extensive experience with at one point in my career. I'm not specifically endorsing it (there are more examples here [1]), but I will write about it from my perspective having used DOORS. It particularly addressed this part of your question:

> When something changes, there’s rarely a clean way to know what’s now invalidated, what needs re-review / re-test, and what’s actually ready at a program level

In our context it was being used for safety-critical systems; I used it for the software side, but they also used it for hardware. You created multiple documents (in our case: requirements, spec, design, test procedures). Each one was really a table in a database that could be pretty-printed as a nice-looking PDF. Each entry was something like this (probably not the exact format, but it's the kind of information presented):

  R1234 Fire suppression system must automatically discharge after power failure. [Linked from: S0123, ...]

  S0123 30 seconds after detecting a power failure [mechanism described elsewhere], the fire suppression system will automatically discharge. [Ref: R1234, Linked from: D0983, D1234, ...]

  D0983 Whatever text describes this part of the spec, maybe this is where detecting power failure is described [Ref: S0123, ...]

  T0013 Procedure which triggers power failure and observes correct behavior (for a passing test) [Ref: D0983, ...]

DOORS would fill in all the transitive references. So the test may not have to refer directly to the requirement, or perhaps it refers only to the requirement and not the design document. But you could query the system and see which entries link, in some fashion, to T0013.

Now when someone comes in and edits S0123 to change it to a 60-second delay, everything linked from S0123 can be marked as needing review. That includes the requirement (which was written more generically in this case) down to the test procedures meant to verify the requirement, spec, or design entry. The test itself is actually many entries (each step was an entry, the way we did it), so that change might invalidate only 2-3 procedure steps, which helps narrow down what needs to be reviewed.
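
To make that concrete, here is a toy sketch in Python of the link graph and the suspect-marking. This is not DOORS or its API, just the shape of the mechanism; the entry IDs are the ones from the example above:

  from collections import defaultdict

  # entry -> entries it references downward ("Ref: ...")
  refs = {
      "S0123": ["R1234"],
      "D0983": ["S0123"],
      "D1234": ["S0123"],
      "T0013": ["D0983"],
  }

  # reverse map: entry -> entries that link from it ("Linked from: ...")
  linked_from = defaultdict(list)
  for src, targets in refs.items():
      for tgt in targets:
          linked_from[tgt].append(src)

  def reachable(entry, graph):
      """Every entry reachable from `entry` by following links transitively."""
      seen, stack = set(), [entry]
      while stack:
          for nxt in graph.get(stack.pop(), []):
              if nxt not in seen:
                  seen.add(nxt)
                  stack.append(nxt)
      return seen

  # Someone edits S0123 (30s -> 60s): everything linked to it, upstream
  # and downstream, gets flagged as suspect and needing review.
  suspect = reachable("S0123", refs) | reachable("S0123", linked_from)
  print(sorted(suspect))  # ['D0983', 'D1234', 'R1234', 'T0013']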

It could also be used to generate reports of what had or had not been connected to a lower-level item. If the requirement above doesn't trace down to a spec, design, and test entry, you have a problem: either it is implemented and you forgot to describe and demonstrate that fact, or it was never implemented and no test shows what's happening one way or the other.
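
Continuing the toy sketch above, the same graph answers that coverage question; R9999 is a made-up orphan requirement added purely for illustration:

  # Which requirements never trace down to any test (T-prefixed) entry?
  requirements = ["R1234", "R9999"]  # R9999: hypothetical, linked to nothing
  untested = [r for r in requirements
              if not any(e.startswith("T") for e in reachable(r, linked_from))]
  print(untested)  # ['R9999']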

The downside to DOORS is that (at least when I was using it, or the way we used it) it couldn't really encode logic beyond that linking. There are other systems, like Event-B, which can help formalize a design and ensure that the requirements are logically consistent. On its own, DOORS did not stop you from having two entries that gave contradictory specifications.
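
To give a feel for that gap, here is a crude, hypothetical illustration (S0200 is an invented conflicting entry, and real formal methods like Event-B go far beyond this kind of value comparison): link-only tooling never models the parameter itself, so two entries pinning it to different values sail through.

  # Two entries silently disagree about the same parameter; link-only
  # tools have no model of "discharge_delay_s", so neither is flagged.
  specs = {
      "S0123": {"discharge_delay_s": 60},
      "S0200": {"discharge_delay_s": 30},  # hypothetical conflicting entry
  }

  claims = {}
  for entry, params in specs.items():
      for param, value in params.items():
          claims.setdefault(param, set()).add(value)

  conflicts = {p: vals for p, vals in claims.items() if len(vals) > 1}
  print(conflicts)  # {'discharge_delay_s': {60, 30}} (set order may vary)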

[0] https://en.wikipedia.org/wiki/DOORS

[1] https://en.wikipedia.org/wiki/Requirements_engineering_tools...