### Scaling Plasma
Because the security of the chain is ensured by the exit game, it is safe for operators to scale past the computational limits of consumer hardware. Now that we have a reasonable Plasma implementation, I think it's time to see just how far we can push this thing!
### Current Architecture
The Plasma Chain operator as it stands is a single `node.js` application. However, internally it has three separate processes:
1. `Express API` / `Eth Service` -- Parent process
2. `State Manager` -- Child 1, makes use of a LevelDB database & an append-only log
3. `Block Manager` -- Child 2, makes use of a separate LevelDB database & ingests the append-only log
This separation of concerns is critical to the long-term scalability of Plasma operators, as it allows different functions to be processed in parallel. Each component, when benchmarked alone, can already achieve thousands of transactions per second, & with a little love I'm sure could go way faster.
The following is a rough diagram of the architecture you'll find in https://github.com/plasma-group/plasma-chain-operator:
![untitled diagram 4 -page-1](https://user-images.githubusercontent.com/706123/52077799-b66c4c80-2546-11e9-9f51-217afb63ce71.png)
### Hand-wavy AWS toolstack architecture
This is a rough outline of how one might port this design to a cloud provider, in this case AWS. The general structure remains the same; however, message passing is more sophisticated (RabbitMQ) and signature checking is trivially parallelizable (AWS Lambda). The significant downside of this architecture is that it's *expensive*, so if there are better approaches I'd love to hear them.
![untitled diagram 4 -page-2 1](https://user-images.githubusercontent.com/706123/52077763-9b014180-2546-11e9-9f56-3bec2814d405.png)
### The ask
The ask for this issue is to take the first steps toward realizing the vision of a more scalable operator. That means breaking out the API, tx-log, state, and block managers into their own Docker containers, handling message passing between them with some message queue, and setting up a deployment script.
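As a starting point, the containerized layout might look something like the `docker-compose` sketch below. Every service name, build path, and volume here is hypothetical; only the `rabbitmq` image is a real published image. Treat it as a shape to argue about, not a working deployment.

```yaml
# Hypothetical docker-compose sketch of the split-out operator.
version: "3"
services:
  rabbitmq:
    image: rabbitmq:3        # real image; carries messages between services
  api:
    build: ./api             # hypothetical: the Express API split out on its own
    depends_on: [rabbitmq]
    ports: ["3000:3000"]
  state-manager:
    build: ./state-manager   # hypothetical: owns its own LevelDB volume
    depends_on: [rabbitmq]
    volumes: [state-db:/data]
  block-manager:
    build: ./block-manager   # hypothetical: ingests the tx log via the queue
    depends_on: [rabbitmq]
    volumes: [block-db:/data]
volumes:
  state-db:
  block-db:
```

Giving each manager its own container & volume mirrors the existing process split, so the message-queue wiring is the only genuinely new plumbing.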
That said, comments on the overall architecture & approach to scaling are more than welcome. I've had very little peer review!! eek!