Storing encrypted key to clusters
Interacting with Ternoa SGX Enclaves
This section is the last step to store and secure private content on the Ternoa chain. Again, we strongly recommend reading the first two sections to familiarize yourself with the key concepts and steps:
Now that the assets are prepared and the content is uploaded to IPFS, we can handle the last two steps of the process: creating the NFT with encrypted content and storing the private key on our TEE cluster.
Creating the Secret NFT or the Capsule NFT
Depending on your use case, a Secret NFT or a Capsule NFT can be created in different ways using the blockchain extrinsics.
To maintain consistency with the first steps, we will continue by building a Secret NFT, retrieving the keyring directly from the SEED.
We are about to add a couple of pieces of code to our encryptAndStoreContent() function implemented in Generate keys & Encrypt:
Set and upload content for the public part of the NFT.
Submit the extrinsic with the off-chain & secret off-chain metadata stored on IPFS.
Copy and paste the following code after the log of the secretOffchainDataHash in the encryptAndStoreContent function. Don't forget to add the imports and update the SEED argument in the getKeyringFromSeed() function.
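As a reference, the addition can look like the sketch below. The createSecretNft and WaitUntil imports and the exact argument order are assumptions here; check the Ternoa JS SDK reference for the exact signature. The offchainDataHash and secretOffchainDataHash variables come from the IPFS uploads in the previous section.

```typescript
// Sketch only: the createSecretNft argument order is assumed; verify it
// against the Ternoa JS SDK reference before use.
import { createSecretNft, getKeyringFromSeed, WaitUntil } from "ternoa-js"

const keyring = await getKeyringFromSeed("//TernoaTestAccount") // replace with your SEED
const secretNftEvent = await createSecretNft(
  offchainDataHash,       // public off-chain metadata hash from the IPFS upload
  secretOffchainDataHash, // secret off-chain metadata hash from the IPFS upload
  0,                      // royalty
  undefined,              // collection id: none here
  false,                  // isSoulbound
  keyring,
  WaitUntil.BlockInclusion,
)
console.log("Secret NFT created - id:", secretNftEvent.nftId)
```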
Send the private key to the SGX Enclaves
Now that the Secret NFT is created, and its content is encrypted and stored, the last step consists of securing the private key. We want our private key to be split into five Shamir shares, submitted, and stored in each of the five enclaves of an SGX cluster.
No worries, we've got you covered again with the most user-friendly helper ever: prepareAndStoreKeyShares()
To get a deep understanding of how these helpers work, we invite you to look at the Ternoa JS SDK code here.
Behind the scenes, this helper creates a temporary derived account based on the transaction signer, eliminating the need to sign multiple key-share submissions. It generates the Shamir shares from the private key, organizes the shares into formatted payloads, and then uploads them to the TEE Cluster enclaves. First, let's examine the code in detail before we delve into the upload process.
Once again, copy and paste the following code after the secretNftEvent response implemented earlier.
About the prepareAndStoreKeyShares() arguments:
privateKey: The private key to be split with the Shamir algorithm.
signer: The owner account of the private key to split, provided either as a keyring or as an account address only (string).
nftId: The Capsule NFT id or Secret NFT id to link to the private key.
kind: The kind of NFT linked to the key to upload: "secret" or "capsule".
extensionInjector: (Optional) If the signer comes from an extension that signs the transaction with a wallet, you will need to provide the injector. We recommend the Polkadot extension: the object must have a key named "signer". If your transaction is signed using the SEED to create your keyring, as we did in our example, you can set this one as undefined.
clusterId: (Optional) The TEE Cluster id retrieved with getFirstPublicClusterAvailable().
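Putting these arguments together, a call can look like the following sketch. It assumes prepareAndStoreKeyShares and getFirstPublicClusterAvailable are importable from the SDK root; privateKey, keyring, and the NFT id come from the previous steps.

```typescript
// Sketch only: verify the import paths and return shape against the
// Ternoa JS SDK reference.
import { getFirstPublicClusterAvailable, prepareAndStoreKeyShares } from "ternoa-js"

const clusterId = await getFirstPublicClusterAvailable()
const teeRes = await prepareAndStoreKeyShares(
  privateKey,           // the key to split into Shamir shares
  keyring,              // signer: the NFT owner's keyring
  secretNftEvent.nftId, // the Secret NFT id created earlier
  "secret",             // kind of NFT linked to the key
  undefined,            // extensionInjector: not needed when signing with a keyring
  clusterId,
)
console.log(teeRes)
```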
About the Shamir Shares Upload on SGX machines
Advanced concept: The prepareAndStoreKeyShares() helper relies on the teeKeySharesStore() function to upload the Shamir shares to a cluster. We recommend taking a quick look at how it works here. Additionally, teeKeySharesStore() is an atomic helper that you may need to use in case of failures during the upload of the shares to the SGX enclaves. This helper already includes a retry mechanism, with the default set to 3 retries. The last two optional arguments are nbRetry and enclavesIndex. These options allow you to specify the number of retries to perform and an array of enclave indexes. The payloads argument expects formatted payloads, similar to the ones generated in the prepareAndStoreKeyShares() function. If some enclaves respond to prepareAndStoreKeyShares() with failures, you can store the payloads and submit them again later, providing the indexes of the failed enclaves in the enclavesIndex argument to retry only the uploads that failed in the previous attempt.
Example: Enclave 3 and Enclave 4 in cluster 0 failed to perform the upload, while enclaves 1, 2, and 5 succeeded. You can store the complete payloads and resubmit them by specifying [2, 3] as the enclavesIndex argument.
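To build the enclavesIndex argument for such a retry, you can derive the zero-based indexes of the failed enclaves from whatever per-enclave success information you tracked. A minimal sketch (the boolean-array input is hypothetical; the SDK's actual response shape may differ):

```typescript
// Map per-enclave upload results (in cluster order) to the zero-based
// indexes expected by the enclavesIndex argument.
const failedEnclaveIndexes = (uploadSucceeded: boolean[]): number[] =>
  uploadSucceeded.flatMap((ok, index) => (ok ? [] : [index]))

// Enclaves 1, 2, and 5 succeeded; enclaves 3 and 4 failed:
console.log(failedEnclaveIndexes([true, true, false, false, true])) // [2, 3]
```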