What the Enterprise wants
When it comes to having its finger on the pulse of what online consumers want, Amazon.com arguably knows more than any other company. That data-driven, qualitative knowledge carries into the B2B world via Amazon Web Services (AWS). So when AWS CEO Andy Jassy finally introduced the AWS blockchain portfolio last year, grounded in that highly respected market research, his notable conclusion was that enterprises primarily wanted a “simple, immutable verifiable ledger” from blockchain technology. Translated into business speak, that’s “integrity, without the complexity,” which justified the accompanying Quantum Ledger Database (QLDB) pre-announcement.
Nine months later, QLDB is generally available, and verifiable data integrity is about to go mainstream! AWS has done a great job of defining an immutable verifiable ledger database as having two key components:
1. The data state & index
2. An associated journal
While the first version of QLDB uses a document-style JSON store for #1 above, this definition is extensible to other database and storage instance types (relational, in-memory, document, object, key-value, time series, graph, and more). The Chainkit service elegantly slots into this definition via #2 above - as the journal for any database or storage, in the cloud and on-prem.
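To make the second component concrete, here is a toy hash-chained journal in Python: each entry’s digest covers the previous entry’s digest, so rewriting any past record breaks every hash after it. This is a simplified sketch of the general technique, not the actual implementation of QLDB’s journal or of Chainkit; the `Journal` class and its method names are illustrative only.

```python
import hashlib
import json

class Journal:
    """Minimal append-only journal: each entry's hash covers the
    previous entry's hash, so altering history breaks the chain."""

    GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

    def __init__(self):
        self.entries = []  # list of (document, entry_hash) pairs

    def append(self, document: dict) -> str:
        prev_hash = self.entries[-1][1] if self.entries else self.GENESIS
        payload = json.dumps(document, sort_keys=True) + prev_hash
        entry_hash = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append((document, entry_hash))
        return entry_hash

    def verify(self) -> bool:
        """Recompute every hash from the genesis value forward."""
        prev_hash = self.GENESIS
        for document, entry_hash in self.entries:
            payload = json.dumps(document, sort_keys=True) + prev_hash
            if hashlib.sha256(payload.encode()).hexdigest() != entry_hash:
                return False
            prev_hash = entry_hash
        return True

journal = Journal()
journal.append({"id": 1, "owner": "alice"})
journal.append({"id": 1, "owner": "bob"})  # a revision, not an overwrite
assert journal.verify()

# Tampering with a historical record is detected on verification.
journal.entries[0] = ({"id": 1, "owner": "mallory"}, journal.entries[0][1])
assert not journal.verify()
```

Note that the journal records revisions rather than overwriting state; the current data state and its index (component #1) can always be rebuilt by replaying the journal.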
While Google’s Bigtable may give it a run for its money, the most popular data store in the cloud (and on the planet?) today is Amazon’s Simple Storage Service (S3). Naturally, a verifiable ledger feature for S3 is a top AWS customer request. As a data-agnostic and blockchain-agnostic journal, Chainkit has offered this capability since the Q4 2018 pre-announcement of QLDB. Chainkit is ideal not only for connecting existing AWS services to QLDB for its highlighted use cases, but also for offering customers decentralized-authority ledger options alongside centralized ledgers such as QLDB. With Chainkit, your choice of verifiable ledger type no longer forces trade-offs among cost, performance, and environmental impact!
Secure Analytics and AI
I recently blogged about attesting to absolute integrity, or tamper evidence (there is no middle ground), in security analytics as a timely use case for verifiable ledgers. Deepfake AI abuse, such as social engineering attacks that maliciously impersonate corporate executives to authorize fraud, is a bold new front in cybercrime. Consequently, I predict that machine learning pipelines generating knowledge models (such as multi-layered neural networks) will mandate Explainable AI, backed by tamper evidence against data poisoning, more quickly than the traditional analytics world mandates it for outputs such as Tableau dashboards.
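The data-poisoning concern can be sketched in a few lines: fingerprint each pipeline artifact (dataset shard, model weights) at training time, then re-check the fingerprints before trusting the model’s outputs. The `fingerprint` helper and manifest below are hypothetical names for illustration; a production pipeline would anchor those digests in a verifiable ledger rather than a plain dict.

```python
import hashlib

def fingerprint(artifact: bytes) -> str:
    """SHA-256 digest of a pipeline artifact."""
    return hashlib.sha256(artifact).hexdigest()

# Record digests when the pipeline runs...
training_data = b"label,feature\n1,0.5\n0,0.9\n"
manifest = {"train.csv": fingerprint(training_data)}

# ...and re-verify them before the model's outputs are trusted.
def is_untampered(name: str, artifact: bytes, manifest: dict) -> bool:
    return manifest.get(name) == fingerprint(artifact)

assert is_untampered("train.csv", training_data, manifest)

poisoned = training_data.replace(b"1,0.5", b"0,0.5")  # a flipped label
assert not is_untampered("train.csv", poisoned, manifest)
```

The fingerprints themselves only help if an attacker cannot rewrite the manifest along with the data, which is exactly the gap a tamper-evident journal closes.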
Agree or disagree? Let’s continue the discussion on Twitter @valb00.