polkadot, kusama, blockchain, pallet, database, substrate, rust
### Prize Title
### Prize Bounty
$8,000 USD worth of ERC20 BLZ, at the time of payout
### Challenge Description
Bluzelle provides decentralized data services, powered by the Cosmos blockchain. Our services include a key-value store (CRUD), an oracle, and NFTs. We are also building toward supporting the EVM (Ethereum Virtual Machine) and Polkadot for our services. Our bounties reflect our aggressive approach to consistently improving our ecosystem and value proposition.
While the Hackathon has a specific start and end date, we are fine with the chosen winners continuing work after the hackathon to finish their projects to our standards.
We need a “pallet” built with Polkadot Substrate technology. A pallet is like a module that gets included into a “parachain” — a parachain is a blockchain that is part of the Polkadot ecosystem. Parachains are built with a blockchain-building framework called Substrate, which is written in Rust, as are pallets. For testing, you would likely build your own mini “test” blockchain with Substrate and run your pallet on it; this test chain is where you verify that the pallet works properly when dropped into any arbitrary parachain, since the pallet should be built and tested to work with any parachain.
The pallet would have two primary tasks:

1. Expose the Bluzelle DB and Oracle APIs to any parachain that includes it. Once included, apps on that parachain can use the DB and Oracles via the API.
2. Act as a bridge back to the Bluzelle blockchain, which is built on Cosmos and runs completely separately from the core Polkadot ecosystem. It is important to emphasize that the bridge is a large part of the effort here.
The pallet would likely need to define a message format to handle requests and pass them on to the Bluzelle blockchain, and to receive responses from the Bluzelle chain and route them back to their recipients on Polkadot. The pallet uses OCWs (off-chain workers), which in turn make web connections (HTTP, etc.) to a Bluzelle endpoint (typically a sentry that exposes the REST and/or RPC ports) to send the requests and receive the responses.
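To make the message-format idea concrete, here is a minimal, dependency-free Rust sketch. All type names, operation names, and the JSON shape are hypothetical — the real format would be whatever you design against Bluzelle's actual REST/RPC endpoints, and a real pallet would likely use the SCALE codec or serde rather than hand-rolled JSON.

```rust
// Hypothetical message format for pallet <-> Bluzelle bridge traffic.
// Nothing here is part of any real Bluzelle or Substrate API.

/// A request originating from a parachain app. The `id` passed alongside it
/// lets the off-chain worker match the eventual response back to its sender.
#[derive(Debug, Clone, PartialEq)]
pub enum BluzelleRequest {
    /// Key-value store (CRUD) operations.
    Create { key: String, value: String },
    Read { key: String },
    /// Oracle price query, e.g. for the pair "BTC/USD".
    OracleQuery { pair: String },
}

/// The reply routed back to the requesting app on the parachain.
#[derive(Debug, Clone, PartialEq)]
pub enum BluzelleResponse {
    Ack,
    Value(String),
    OraclePrice(String),
    Error(String),
}

/// Serialize a request into a JSON body the off-chain worker could POST
/// to a Bluzelle endpoint (hand-rolled to stay dependency-free).
pub fn to_json(id: u64, req: &BluzelleRequest) -> String {
    match req {
        BluzelleRequest::Create { key, value } => {
            format!(r#"{{"id":{id},"op":"create","key":"{key}","value":"{value}"}}"#)
        }
        BluzelleRequest::Read { key } => {
            format!(r#"{{"id":{id},"op":"read","key":"{key}"}}"#)
        }
        BluzelleRequest::OracleQuery { pair } => {
            format!(r#"{{"id":{id},"op":"oracle","pair":"{pair}"}}"#)
        }
    }
}

fn main() {
    let body = to_json(7, &BluzelleRequest::Read { key: "greeting".into() });
    println!("{body}");
}
```

The correlation id is the important design point: because the OCW round-trip to Bluzelle is asynchronous relative to block production, the pallet needs some way to pair a response arriving later with the on-chain request that triggered it.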
One approach to TDD-style verification of your pallet on your Substrate chain is as follows:
Build a test Substrate chain that uses your pallet. The server side of the pallet would initially be stubbed out with placeholders (essentially, you are faking the Bluzelle endpoints). The stub stands in for what would eventually be a message sent to Bluzelle (i.e., the off-chain worker that gets the message to Bluzelle, serviced, and responded to). At this stage, the stub just dumbly responds with a contextually appropriate message, which is sent back through the pallet to the client (the Substrate chain). The idea is that the pallet is the real thing, but its output interface simulates what will eventually take place. Once all of this is in place, you can implement the actual off-chain worker — the “meat” of the code — that talks to Bluzelle and relays its replies.
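The stub-first flow above can be sketched in plain Rust by hiding the transport behind a trait. Everything here is illustrative: the trait, the stub's canned replies, and the JSON shapes are assumptions, and in a real pallet the “endpoint” would be an off-chain worker making HTTP calls rather than an in-process trait object.

```rust
// Stage-1 TDD sketch: pallet logic is real, the Bluzelle side is faked.

/// Whatever carries requests to Bluzelle: the stub now, the real OCW later.
pub trait BluzelleEndpoint {
    fn send(&self, request: &str) -> String;
}

/// A stub that dumbly responds with a contextually appropriate message,
/// letting the rest of the pallet be exercised end to end before any
/// networking code exists.
pub struct StubEndpoint;

impl BluzelleEndpoint for StubEndpoint {
    fn send(&self, request: &str) -> String {
        if request.contains(r#""op":"read""#) {
            r#"{"ok":true,"value":"stubbed-value"}"#.to_string()
        } else if request.contains(r#""op":"oracle""#) {
            r#"{"ok":true,"price":"1.00"}"#.to_string()
        } else {
            r#"{"ok":true}"#.to_string()
        }
    }
}

/// Pallet-side logic under test: it only sees the trait, so swapping in a
/// real HTTP-backed endpoint later requires no changes here.
pub fn handle_read(endpoint: &dyn BluzelleEndpoint, key: &str) -> String {
    let request = format!(r#"{{"op":"read","key":"{key}"}}"#);
    endpoint.send(&request)
}

fn main() {
    println!("{}", handle_read(&StubEndpoint, "greeting"));
}
```

Once your test chain passes against `StubEndpoint`, the remaining work is a second implementation of the same trait boundary backed by the off-chain worker's HTTP client, pointed at a real Bluzelle sentry.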
As an example of an existing pallet that does something similar:
A few important notes:
- For the hackathon, our Stargate testnet does not yet have Oracles. To support Oracles, you will need to use our older “LaunchPad” testnet instead, called TestNetPublic.
- DB functionality should be written to use our Stargate testnet.
- It may be preferable to test and build your submission against a Polkadot testnet or Kusama rather than the Polkadot mainnet; this is acceptable. Be sure to clearly document where your submission should be tested.
Our JS library:
To install our libraries: install @bluzelle/sdk-js with “yarn” or “npm”.
### Submission Requirements
The submission should include sufficiently QA’d documentation on how to deploy the service/product and how to use the submission per the requirements of the bounty.
This should include documentation of the commands used to interact with the submission, and of how the submission is configured to work properly with BluzelleDB, etc.
A video demo should be included. It would be nice to have an English voice-over so that we can fully understand the submission, but this is not a strict requirement. A computer-generated voice-over is fine too, if you prefer.
The demo should also walk through the code and explain all the items that are being provided. The demo should walk through the process of deploying the submission, and how to use it, etc.
It is expected that the documentation is accurate. We will follow your documentation, to properly evaluate the submission. If it is incorrect, we may be unable to fairly evaluate your submission.
Including tests with your submission will greatly improve your chances of winning. We like to run tests that are highly verbose and explicit in terms of what they are doing, so we can gain confidence in the correctness of what you have submitted. If you provide tests and expect us to run them, like everything else, document it well, and ensure that the tests can be run by us -- give us the steps to setup and run the tests.
If the documentation is incomplete or incorrect, we may be unable to fairly assess the submission, as we will walk through the documentation to validate the project. Due to practical limits on time and resources, once a project is submitted we are not able to provide much assistance in correcting a project that is not working properly, nor to ask for the proper steps if the documentation that comes with a submission is insufficient.
Your project will be judged on what you submit. Please submit something that is complete, well thought out, and tested, in terms of documentation, product features, and code quality. We will do our best in our evaluation, but obviously, the easier you make our life, the better your chances of winning.
WE LOVE VERBOSITY AND DOCUMENTATION. There is no such thing as too much information. Explain what you have built, and please ensure it will run CORRECTLY when we follow your directions literally. Doing this alone will vastly improve your chances of victory.
### Judging Criteria
As part of the evaluation process, our goal is to fully set up and use your submission, successfully and without any major hiccups.
Based on the ease of doing this and the quality of your documentation, product, code, and features, we will choose the winner.
There is no preference given to the order of submissions -- just be sure to submit on time. Once submitted, we will evaluate, and there will not be a lot of opportunity for back and forth. Please ensure your submitted documentation and code are complete, enabling us to judge the project properly on its merits.
We will choose the best submission based on quality. Documentation and properly written code are a large part of the criteria; a project that we cannot deploy ourselves is difficult to award a prize to. We will do our best to contact you if there is an issue, but practically, we probably won’t have much time after submission to ask for clarification or for a bug fix. It should ideally work as-is when we judge it.
Note: While the descriptions given for bounties are quite explicit and even tend to suggest how an actual solution to each problem can be built, you as a developer have the option to architect the solution your own way. We have provided guidance on a solution we see as reasonable, but we are open to considering other solutions. Obviously, we will choose the best overall submission, based on various factors including the elegance of the solution.