The GRAX application is built for long-term, near-infinite retention of your Salesforce object data. To store this data quickly, reliably, and at scale, GRAX uses AWS S3 and recommends comparable options on Azure and GCP. The sections below cover how GRAX uses this storage, what storage growth to expect, why we don't recommend cross-cloud configurations, and which lifecycle processes are supported.
- AWS S3
- Azure Blob Storage
- GCP Cloud Storage
On AWS, GRAX supports only the S3 Standard storage class. GRAX will not work with Intelligent-Tiering, Glacier, or Outposts.
On Azure, GRAX supports only standard general-purpose v2 (GPv2) storage accounts. Premium storage accounts or containers will not work with GRAX.
On GCP, GRAX supports only the Standard storage class. Nearline, Coldline, and Archive storage will not work with GRAX.
Direct access, modification, or removal of data in the GRAX bucket is not supported. Renaming, removing, or modifying blobs within the storage bucket will cause data loss and GRAX availability issues. GRAX is not responsible for partial or complete loss of your backup dataset if this restriction is violated.
For the same reasons, the following are also unsupported:
- Lifecycle rules triggering blob deletion
- Lifecycle rules moving blobs to alternative storage tiers/classes
- Restoration via blob versioning
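As an illustration, an S3 lifecycle configuration like the following would violate both of the lifecycle restrictions above and must never be applied to the GRAX bucket. The rule ID and day counts are hypothetical placeholders; any transition or expiration rule, whatever its values, is unsupported:

```json
{
  "Rules": [
    {
      "ID": "example-unsupported-rule",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [
        { "Days": 30, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 365 }
    }
  ]
}
```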
For targeted record deletion (like GDPR compliance), see our related documentation.
GRAX's primary storage layer consists of compressed blobs in a proprietary format. Following established big-data storage practices, this layer provides a write-optimized, scalable performance profile well suited to data backup.
To achieve this write-optimized performance, GRAX initially writes data with minimal compression and deduplication. In the hours following the initial backup, that data is processed asynchronously to compress, deduplicate, and sort the contained records; this process is called compaction. Compaction is transparent to GRAX users, as access to the data is not limited while it runs.
Compaction produces immutable storage blobs: as data is compacted past its originally written state, new blobs are written to represent it and the source blobs are marked for deletion. Fourteen days after being marked, those non-compacted blobs are deleted from storage, since the data within them is now represented in newer, more efficient blobs. This cycle repeats as your dataset grows, making compaction a permanent, recurring background process that maintains your dataset.
Note also that the vast majority of load on the GRAX application occurs in the first few weeks of operation, when you connect GRAX and it captures a snapshot of the entire exposed Salesforce dataset. Data can therefore build up temporarily until compaction, and then deletion, catches up.
With this process in mind, the expected storage utilization follows a generalized curve: a climb to a peak during the initial snapshot, a decline as compaction and blob deletion catch up, and then gradual growth alongside your dataset (specific values, units, and timeframes depend on your specific environment).
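A rough model of that curve can be computed from the process described above. Every constant here is an arbitrary placeholder for illustration only, not GRAX sizing guidance; the model assumes each day's raw writes are compacted the next day and the raw blobs are deleted 14 days after compaction:

```python
# Illustrative placeholders only -- not GRAX sizing guidance.
SNAPSHOT_DAYS = 21          # initial full snapshot spread over ~3 weeks
SNAPSHOT_GB_PER_DAY = 50    # raw (lightly compressed) writes during the snapshot
INCREMENTAL_GB_PER_DAY = 2  # steady-state daily backup volume afterwards
COMPACTION_RATIO = 0.4      # compacted size relative to raw size
GRACE_DAYS = 14             # raw blobs linger this long after compaction

def raw_written(day: int) -> float:
    """Raw GB written on a given day (snapshot phase, then incrementals)."""
    return SNAPSHOT_GB_PER_DAY if day < SNAPSHOT_DAYS else INCREMENTAL_GB_PER_DAY

def storage_on_day(day: int) -> float:
    """Total GB stored: compacted data plus raw blobs awaiting deletion."""
    compacted = COMPACTION_RATIO * sum(raw_written(k) for k in range(day))
    pending = sum(raw_written(k) for k in range(max(0, day - GRACE_DAYS), day + 1))
    return compacted + pending

for day in (0, 10, 21, 35, 60, 120):
    print(f"day {day:3d}: {storage_on_day(day):7.1f} GB")
```

Running this prints a rise to a peak around the end of the snapshot phase, a drop once the grace period expires and raw blobs are deleted, and then slow growth from incremental backups, matching the curve described above.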
For information on how to connect your GRAX app to a bucket for the first time (or change your existing storage connection), review our related documentation.