Vol. XV / Issue 06

The McKinnie Dispatch

Filed from object storage

Storage migration ArchibotChat 2026

The upstream signal I caught late

I missed the MinIO turn.

MinIO's community edition had been heading toward maintenance mode for months. I noticed when Reddit made it embarrassing to miss.

On April 25, 2026, the MinIO repository was archived on GitHub. I did not catch it when it happened. I caught it a couple of days later when a Reddit thread surfaced it. The original poster had just been researching self-hosted S3 storage, ran into the archived repo, and asked whether MinIO was still worth adopting for a homelab. The replies were a fast tour of alternatives: Garage, SeaweedFS, RustFS, Ceph, Apache Ozone. That was useful. What I needed to decide was what to do about ArchibotChat.

Why this mattered for ArchibotChat specifically.

ArchibotChat uses object storage for artifact handling: things uploaded during a session, results the backend writes and the frontend retrieves, transient files the application backend manages on behalf of users. That is not a trivial path. It is close enough to the product surface that once users depend on it, a storage backend swap becomes a migration story with customer impact.
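To make that path concrete, here is a rough sketch of how artifacts on it can be keyed per session. The layout and names are illustrative only, not ArchibotChat's actual scheme.

from dataclasses import dataclass

@dataclass(frozen=True)
class ArtifactKey:
    # Illustrative only: three kinds of artifacts flow through the same path.
    session_id: str
    kind: str  # "upload" | "result" | "transient"
    name: str

    def as_object_key(self) -> str:
        # One S3-style key per artifact, namespaced by session and kind, so the
        # backend can list or clean up a whole session with a single prefix scan.
        return f"sessions/{self.session_id}/{self.kind}/{self.name}"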

The system had been using a shared MinIO path. That was fine when it was internal plumbing. It was less fine when I looked at the MinIO picture directly.

The pattern had been building for a while. ItsFOSS documented the full arc: community UI and feature removal in May 2025, the end of Docker images and prebuilt binaries in October 2025, a maintenance-mode message in December 2025, and then archive events in February and April 2026. The current MinIO README is explicit: the community edition is source-only now. Precompiled binaries are no longer provided. Legacy releases exist for reference but are no longer maintained. Enterprise and SLA workloads are pointed toward AIStor.

None of that is wrong for MinIO as a business. But it changes the calculus for anyone building on the community edition path, especially in a product that has not launched yet and where I can still choose what the dependency looks like.

Picking a replacement without pretending it was obvious.

I read an Elestio comparison of RustFS, SeaweedFS, and Garage published in April 2026. The framing was useful:

RustFS is the closest thing to a MinIO-style replacement in terms of positioning. Apache 2.0 license, Rust, S3-compatible, high-performance marketing. The Elestio piece called it immature for production, and the feature table on the GitHub repo still marked some distributed and lifecycle features as under testing. Interesting, but not the conservative pick for a product path I was about to prove out.

Garage is built for geo-distributed and self-hosted deployments outside traditional datacenters. Good design, active project. The AGPL licensing and the multi-node topology assumptions were not a fit for the immediate use case, which was single-cluster artifact storage for ArchibotChat.

SeaweedFS got the production-workhorse framing in that comparison, and it had something concrete backing it up: as of KFP 2.15, Kubeflow Pipelines changed its default object store from MinIO to SeaweedFS. That is not a random novelty choice. That is a project making a production-default decision about what to depend on. SeaweedFS describes itself as handling billions of files with O(1) disk access, S3-compatible, supporting file systems and Iceberg tables. The maturity signal was there. I picked it.

The prelaunch decision.

ArchibotChat had not launched yet. That changed the entire risk calculation.

Exhibit A: The decision tree
If customers were already using artifact storage, the answer is a migration plan with compatibility preserved. When nothing is live yet, the answer is simpler.
if product_live:
  write migration plan
  preserve compatibility
  move customers deliberately
else:
  delete coupling
  replace backend
  prove the product path

If customers had already been using this product path, the right answer would have been a careful migration: preserve the S3-compatible interface, move credentials, validate the new backend, cut over deliberately. But there were no customers on this path yet. The right answer was a deletion plan. Remove the product-specific MinIO coupling before it became something customers could feel.

The shared platform MinIO still existed for other services. This was specifically about ArchibotChat artifact storage moving off that shared path and onto dedicated storage before launch, rather than after.

What GPT-5.5 did in about 34 minutes.

The actual migration happened in a single GPT-5.5 Codex session on April 26, 2026. The first migration instruction landed at 10:20 UTC. The sharper instruction, "we haven't gone live yet so just rip out minio stuff," landed at 10:29 UTC. Artifact write and read smokes were passing by 11:01 UTC, and the migration-done summary landed at 11:03 UTC.

So the practical migration took about 34 minutes from the sharper instruction, or about 43 minutes from the first one. Including follow-up hardening through 11:28 UTC, the total from the first instruction was about 68 minutes.

What GPT-5.5 actually did was hold the whole thing at once: the source repo changes, the GitOps changes, the build and push steps, the Flux reconciliation, and the verification steps. The session had to touch the backend config, the backend server code, the frontend and docs copy, the GitOps manifests for the prelaunch environments, SeaweedFS HelmRelease wiring, runtime storage configuration, and the ArchibotChat-specific MinIO references that needed to go away.
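The backend config piece is the easiest to picture. Here is a sketch of the kind of setting that changed, with hypothetical variable names and an assumed in-cluster service name for the SeaweedFS S3 gateway; the real keys and DNS names in the repo and GitOps manifests will differ.

import os
from dataclasses import dataclass

@dataclass
class ObjectStoreSettings:
    endpoint_url: str
    access_key: str
    secret_key: str
    bucket: str

def load_settings() -> ObjectStoreSettings:
    # Before: the shared platform MinIO service. After: a dedicated SeaweedFS
    # S3 gateway. Only the values change; the code reading them does not.
    return ObjectStoreSettings(
        endpoint_url=os.environ.get("OBJECT_STORE_ENDPOINT", "http://seaweedfs-s3:8333"),
        access_key=os.environ["OBJECT_STORE_ACCESS_KEY"],
        secret_key=os.environ["OBJECT_STORE_SECRET_KEY"],
        bucket=os.environ.get("OBJECT_STORE_BUCKET", "archibotchat-artifacts"),
    )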

GPT-5.5 did not make object storage easy. It made the amount of coordination I could hold in one session feel different. The model was capable of staying oriented across many moving parts: switching between repo context and GitOps context, running the right validation steps after each change, and not losing track of which environments had been reconciled.

The token story was smaller than the cumulative counter makes it sound. Most of the session was cached context. The billable work was closer to the output and reasoning tokens needed to coordinate the change, and my rough read was about $20 for the whole task. For an agent to change code, change deployment source, build images, reconcile the environments, and prove the product path, that is not free, but it is low enough to change the maintenance math.

Why the S3-compatible boundary paid rent.

The thing that made this fast was that ArchibotChat was already using a generic S3-compatible object storage interface, not MinIO-specific APIs. The application called upload, download, delete, and health through that interface. Swapping the backend meant changing what sat behind that interface, not rewriting how the application talked to storage.

Exhibit B: The contract that survived
The application did not know it was talking to MinIO. It knew it was talking to S3-compatible object storage through the backend layer. That is the boundary that kept the swap contained.
artifact_storage:
  provider: s3-compatible
  caller: application-backend
  contract:
    - upload
    - download
    - delete
    - health

If the code had been calling MinIO-specific endpoints, or relying on MinIO admin APIs, or using MinIO-specific health checks, this migration would have been significantly more involved. The adapter boundary was not foresight about MinIO specifically. It was just the normal discipline of not welding application code to a specific infrastructure implementation. That discipline showed its value here.
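For a sense of what that adapter layer can look like, here is a minimal sketch built on boto3. It is not the actual implementation, and the constructor arguments are assumed names, but the point stands: nothing in it is MinIO-specific, so the swap comes down to pointing the endpoint at SeaweedFS's S3 gateway and rotating credentials.

import boto3
from botocore.exceptions import ClientError

class ArtifactStore:
    # S3-compatible adapter sketch: works against MinIO, SeaweedFS, or any
    # S3 endpoint. Assumed names, not ArchibotChat's actual implementation.

    def __init__(self, endpoint_url: str, access_key: str, secret_key: str, bucket: str):
        self._bucket = bucket
        self._s3 = boto3.client(
            "s3",
            endpoint_url=endpoint_url,  # the only backend-specific value
            aws_access_key_id=access_key,
            aws_secret_access_key=secret_key,
        )

    def upload(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)

    def download(self, key: str) -> bytes:
        return self._s3.get_object(Bucket=self._bucket, Key=key)["Body"].read()

    def delete(self, key: str) -> None:
        self._s3.delete_object(Bucket=self._bucket, Key=key)

    def health(self) -> bool:
        # A plain bucket probe, not a MinIO admin API call, so it survives the swap.
        try:
            self._s3.head_bucket(Bucket=self._bucket)
            return True
        except ClientError:
            return False

The constructor takes the endpoint from config, which is why the swap was mostly a config and GitOps change rather than an application rewrite.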

What made the change real.

A storage migration that only moves config files is not done. What made this one real was running the product path end-to-end.

Backend health reported object_store_provider: seaweedfs and configured: true in both environments after Flux reconciled. The frontend and docs stopped saying MinIO and started saying artifact storage or object store. GitOps stopped reconciling an ArchibotChat-specific MinIO bucket and user. SeaweedFS HelmRelease objects were wired. Runtime storage configuration reconciled.

Then: artifact upload through the backend route, download through the same path, delete through the same path. Smoke in both environments. Both passed.

That is the most important smoke test. Not "does the pod run." Not "does health return 200." It is "can the product upload an artifact, read it back, and delete it through the same path customers will use." Until that passes in both environments, the migration is not done.
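The smoke itself does not need to be elaborate. Something along these lines, run against each environment's backend, covers the path. The route shapes and base URLs below are placeholders; the health fields match what the backend reported after the cutover.

import uuid
import requests

def smoke_artifact_path(base_url: str) -> None:
    # Upload, read back, and delete one artifact through the backend route.
    # Placeholder routes; the real ArchibotChat paths differ.

    # 1. Health should confirm the configured provider before touching data.
    health = requests.get(f"{base_url}/health", timeout=10).json()
    assert health.get("object_store_provider") == "seaweedfs"
    assert health.get("configured") is True

    # 2. Upload through the same route customers will use.
    name = f"smoke-{uuid.uuid4().hex}.txt"
    payload = b"artifact smoke test"
    requests.post(f"{base_url}/artifacts/{name}", data=payload, timeout=30).raise_for_status()

    # 3. Read it back through the same path and compare bytes.
    got = requests.get(f"{base_url}/artifacts/{name}", timeout=30)
    got.raise_for_status()
    assert got.content == payload

    # 4. Delete, and confirm it is gone.
    requests.delete(f"{base_url}/artifacts/{name}", timeout=30).raise_for_status()
    assert requests.get(f"{base_url}/artifacts/{name}", timeout=30).status_code == 404

if __name__ == "__main__":
    # Run against both prelaunch environments; these URLs are placeholders.
    for env in ("https://archibot-staging.example", "https://archibot-preprod.example"):
        smoke_artifact_path(env)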

Follow-up hardening the same morning added synthetic artifact-storage checks to the scheduled health CronJobs in both environments. It also added daily SeaweedFS PVC snapshot CronJobs using the DigitalOcean block storage snapshot class. Both ran successfully and were confirmed ready before the session closed.
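The snapshot side is worth sketching because it is just the Kubernetes snapshot API driven on a schedule. Roughly what each daily run amounts to, with assumed names for the namespace, the PVC, and the snapshot class; the real GitOps values will differ.

from datetime import datetime, timezone
from kubernetes import client, config

def snapshot_seaweedfs_pvc() -> None:
    # A sketch of what the daily snapshot job effectively does: create a
    # VolumeSnapshot of the SeaweedFS data PVC. Namespace, PVC name, and
    # snapshot class name are assumptions, not the actual manifest values.
    config.load_incluster_config()  # the CronJob pod uses in-cluster credentials
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M")
    body = {
        "apiVersion": "snapshot.storage.k8s.io/v1",
        "kind": "VolumeSnapshot",
        "metadata": {"name": f"seaweedfs-data-{stamp}"},
        "spec": {
            "volumeSnapshotClassName": "do-block-storage",  # DigitalOcean block storage snapshot class
            "source": {"persistentVolumeClaimName": "seaweedfs-data"},
        },
    }
    client.CustomObjectsApi().create_namespaced_custom_object(
        group="snapshot.storage.k8s.io",
        version="v1",
        namespace="archibotchat",
        plural="volumesnapshots",
        body=body,
    )

if __name__ == "__main__":
    snapshot_seaweedfs_pvc()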

The actual lesson.

I did not catch the MinIO turn when it happened. The signals were there: the December 2025 maintenance-mode message landed in self-hosted and Kubernetes community threads months before the archive. I was working on other things. The system still worked. I did not look.

Upstream project risk is not avoidable. Something you depend on will change its license, change its support model, change its distribution terms, or go unmaintained. That is not a planning failure. It is a background condition of building on open-source software.

What is controllable is how welded your product is to any specific implementation. ArchibotChat was not welded to MinIO. The backend layer owned the storage interface. The app called a contract. When the backend changed, nothing in the product path broke. 34 minutes from "rip out the coupling" to "smoke is passing" is a reasonable outcome for a storage migration in a prelaunch product.

If this had been post-launch with real customers depending on the MinIO path, the answer would have been slower and more careful. The combination of launch timing, abstraction at the right layer, and validation through the actual product path is what made the fast version possible. None of those three alone would have been enough.

Upstream risk is less scary when your product is not welded to one implementation.