I have a small database server at company A with a small amount of storage (e.g. 10 GB).
At company B I have virtually unlimited storage and computing power, but company A has a slow internet connection and the link between the two sites is only up about 99.00% of the time.
I was wondering if anyone has ever heard of a database architecture that lets company A host its data locally until the database grows too large, and then offloads rarely used rows to company B. Ideally, at that point, the most common queries for recent data would still be served locally and wouldn't need to hit company B at all.
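Something along the lines of the sketch below is what I have in mind. It assumes PostgreSQL on both ends, a made-up `events` table (id, payload, created_at), placeholder connection strings, and an arbitrary 90-day cutoff, so it only illustrates the idea rather than being a finished solution:

```python
import psycopg2

CUTOFF_SQL = "now() - interval '90 days'"   # "rarely used" == older than 90 days

def offload_old_rows(batch_size=1000):
    # Two separate connections: the local DB at company A and the archive at B.
    local = psycopg2.connect("dbname=app host=localhost")           # company A
    remote = psycopg2.connect("dbname=archive host=b.example.com")  # company B
    try:
        with local, remote:  # commit both transactions if the block succeeds
            with local.cursor() as lcur, remote.cursor() as rcur:
                # Pick a batch of old rows at company A.
                lcur.execute(
                    f"SELECT id, payload, created_at FROM events "
                    f"WHERE created_at < {CUTOFF_SQL} LIMIT %s",
                    (batch_size,),
                )
                rows = lcur.fetchall()
                if not rows:
                    return 0
                # Copy them to company B; DO NOTHING makes a retry harmless.
                rcur.executemany(
                    "INSERT INTO events (id, payload, created_at) "
                    "VALUES (%s, %s, %s) ON CONFLICT (id) DO NOTHING",
                    rows,
                )
                # Then free the space at company A.
                lcur.execute("DELETE FROM events WHERE id = ANY(%s)",
                             ([r[0] for r in rows],))
                return len(rows)
    finally:
        local.close()
        remote.close()

if __name__ == "__main__":
    while offload_old_rows():
        pass  # repeat until no old rows are left locally
```

The two commits are obviously not atomic across the two sites, which is why the insert at company B is written to tolerate being re-run.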
It would also be nice if company B retained a non-ACID backup of company A's data, so that if company A suffers a hard drive failure, no more than a few hours of data would be lost.
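For the backup part, I could live with something as crude as the following, which dumps the database every few hours and copies the dump to company B. The host names, paths, and the four-hour interval are placeholders, and it assumes pg_dump and scp are available on the box at company A:

```python
import subprocess
import time
from datetime import datetime, timezone

DUMP_INTERVAL_SECONDS = 4 * 60 * 60       # at most ~4 hours of data at risk
REMOTE = "backup@b.example.com:/backups"  # storage at company B

def dump_and_ship():
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    dump_file = f"/tmp/app-{stamp}.dump"
    # Custom-format dump of the local database at company A.
    subprocess.run(["pg_dump", "-Fc", "-f", dump_file, "app"], check=True)
    # Copy it over the slow, ~99%-available link to company B.
    subprocess.run(["scp", dump_file, REMOTE], check=True)

if __name__ == "__main__":
    while True:
        try:
            dump_and_ship()
        except subprocess.CalledProcessError:
            pass  # the link is down ~1% of the time; try again next cycle
        time.sleep(DUMP_INTERVAL_SECONDS)
```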
What is this architecture called? Are there any open source solutions that support it?
However, the main issue here is not ownership (company A vs. company B) but network access (company A's network vs. company B's) and liability (who has to manage what).
In the previous example, although the DB runs directly on top of AWS (company A) and the code runs on Heroku (company B), which in turn runs on AWS (company A again), all of these VMs/servers are most likely in the same data center, so queries never cross the public internet.
So in the end, if you do want to implement that setup, you have to be extremely careful at the network level, and probably employ VPNs and firewalls.
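For instance, if the archive at company B happens to be PostgreSQL, the bare minimum is to require a verified TLS connection for any query that crosses the link; a minimal sketch, with placeholder host names and certificate paths:

```python
import psycopg2

# Credentials are expected to come from ~/.pgpass or the PGPASSWORD variable.
remote = psycopg2.connect(
    host="archive.b.example.com",        # the archive DB at company B
    dbname="archive",
    user="archiver",
    sslmode="verify-full",               # refuse unencrypted or spoofed servers
    sslrootcert="/etc/ssl/company_b_ca.pem",
)
with remote, remote.cursor() as cur:
    cur.execute("SELECT count(*) FROM events")
    print(cur.fetchone()[0])
```

A site-to-site VPN gives you the same property one layer lower, and the firewall at company B should then only accept database traffic coming from company A's VPN endpoint.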