Architecture and infrastructure
- .auDo started as a concept in February 2026 as a .au namespace R&D observatory.
- Live collection began 17 March 2026.
- The stack is deliberately lean and, frankly, a bit scrappy in places by design. I wanted something I could build, inspect, test, and change quickly without pretending it needed enterprise-grade machinery before it had earned it.
| Component | Current approach |
|---|---|
| Public site | a hand-built static site using plain HTML and CSS, with no CMS and no front-end framework |
| Source control | GitHub |
| Automation and scheduling | GitHub Actions, using both scheduled runs and manual dispatch |
| Main workflow | .github/workflows/collector.yml |
| Data store | Supabase Postgres |
| Data model | snapshots of observed domain state, with events derived from changes between runs |
| Secrets handling | GitHub Actions Secrets |
| Monitoring | UptimeRobot for the public site, with workflow heartbeat monitoring now being added for the collector |
| Seed approach | a deliberately selected seed set rather than wide namespace coverage |
| Provider detection | rule-based inference using nameserver and infrastructure markers rather than anything especially clever |
| Validation approach | repeated runs against the same seed, manual checks of outputs, and comparison of snapshots to separate real change from noise |
| Current external constraint | RDAP in the .au namespace is still not rich or dependable enough to support the kind of registration visibility I would ideally want |
Design choices and trade-offs
There is no big platform here. No Kubernetes, no queueing layer, no observability suite, no orchestration stack, and no polished analyst dashboard. It is a small collector pipeline, a database, a static public presence, and enough automation to start learning from real data.
Why GitHub Actions
GitHub Actions gives me both scheduled runs and manual dispatch. Scheduled runs show whether the observatory behaves properly over time. Manual runs make it easier to test assumptions, inspect outputs, and validate changes without waiting for the clock.
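To make the scheduled-plus-manual setup concrete, here is a minimal sketch of what a workflow like collector.yml could look like. The cron cadence, secret names, and entry point script are all assumptions for illustration, not the real configuration.

```yaml
# Hypothetical sketch only; the actual collector.yml is not shown here.
name: collector

on:
  schedule:
    - cron: "0 */6 * * *"    # example cadence, not the real one
  workflow_dispatch: {}       # manual runs for testing and inspection

jobs:
  collect:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run collector
        env:
          SUPABASE_URL: ${{ secrets.SUPABASE_URL }}   # assumed secret names
          SUPABASE_KEY: ${{ secrets.SUPABASE_KEY }}
        run: python collect.py                        # hypothetical entry point
```

The combination of `schedule` and `workflow_dispatch` triggers is what gives both the long-running behavioural signal and the on-demand runs described above.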
Why Supabase
Supabase is doing the job I need it to do right now: store the observed state of domains over time and support a simple event model on top of that. The structure is intentionally straightforward. I would rather have clean snapshots first and derive better interpretation later.
Why seed-first
I am not trying to crawl the whole .au namespace or pretend broad coverage. A smaller seed is more useful at this stage because it creates a controlled test bed for comparison logic, provider detection, event quality, and edge cases.
Why the model is simple
The data model is snapshots first, events second. That keeps the observatory grounded in what was actually observed, rather than over-engineering interpretation too early and ending up with abstractions that are harder to trust.
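The snapshots-first, events-second idea can be sketched as a field-by-field comparison of consecutive snapshots, emitting one event per observed change. The field names below are illustrative, not the real schema.

```python
# Illustrative snapshot shape; the real schema is not shown in this document.
# A snapshot is a plain dict, e.g. {"domain": "...", "nameservers": [...], "status": "..."}

def derive_events(prev: dict, curr: dict,
                  fields: tuple[str, ...] = ("nameservers", "status")) -> list[dict]:
    """Compare two snapshots of the same domain; emit one event per changed field."""
    events = []
    for f in fields:
        if prev.get(f) != curr.get(f):
            events.append({
                "domain": curr.get("domain"),
                "field": f,
                "before": prev.get(f),
                "after": curr.get(f),
            })
    return events
```

Because events are derived rather than stored as primary data, the interpretation layer can be rewritten later without touching what was actually observed.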
Still rough
- Provider inference is still rule-based and sometimes messy.
- Some providers are obvious; others are ambiguous, custom, or annoying.
- The event model is still being tightened to reduce noise.
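A rule-based inference like the one described can be sketched as substring matching against known nameserver markers. The rule table below is entirely hypothetical; the real markers are not listed in this document.

```python
# Hypothetical marker table for illustration only.
PROVIDER_RULES = {
    ".cloudflare.com": "Cloudflare",
    ".awsdns": "AWS Route 53",
    ".registrar-servers.com": "Namecheap",
}

def infer_provider(nameservers: list[str]) -> str | None:
    """Return a provider label if any nameserver matches a known marker, else None."""
    for ns in nameservers:
        ns = ns.lower().rstrip(".")
        for marker, provider in PROVIDER_RULES.items():
            if marker in ns:
                return provider
    return None  # ambiguous or custom setups stay unattributed
```

Returning `None` rather than guessing is the point: unattributed is more honest than wrongly attributed, which matches the cautious stance taken elsewhere in the build.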
Testing has already changed the build
The first live runs were useful because they exposed noisy events and weak assumptions. A run can complete successfully and still produce rubbish. For an observatory, that is a failure.
Current limitation
RDAP in .au is still not rich or dependable enough to give the registration visibility I would ideally want. That means parts of the interpretation layer need to stay cautious.
Current priority
The goal now is not to make .auDo bigger. It is to make it sharper: better validation, cleaner events, stronger attribution, and more reliable monitoring around the collector itself.