Still in beta: the data can be incomplete and the website may be laggy while blocks are being processed!
What is the project about?
SteemChain is an open source application that analyses transactions and operations from the STEEM blockchain, stores them in a MySQL database, and visualises them with charts and tables on the web. The goal is to bring data analytics to STEEM the way websites such as blockchain.com do for Bitcoin.
- Django Framework - The web framework used
- Django REST Framework - The REST framework used
- Bootstrap - HTML/CSS Framework
- MySQL - Database
- Create a parent class for every operation type to inherit from, expand coverage to all operation types, and allow for flexibility between different operation types.
- Create dynamic MySQL queries so additional data can be stored for specific operation types
- Store account history values such as total transfers, amounts transferred, etc.
- Add additional analytics for each operation type
- Overview of all operation types and changes for specific time periods
- Account dashboard
- Write API docs
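The parent-class idea from the roadmap could look roughly like this. This is a sketch, not the project's actual code; the class and field names are assumptions:

```python
from abc import ABC, abstractmethod


class Operation(ABC):
    """Base class every operation type would inherit from (hypothetical)."""

    def __init__(self, block_num, timestamp, raw):
        self.block_num = block_num
        self.timestamp = timestamp
        self.raw = raw  # the raw operation payload from the block

    @abstractmethod
    def parse(self):
        """Extract the fields this operation type cares about."""

    def to_row(self):
        """Columns shared by all operation types."""
        return {"block_num": self.block_num, "timestamp": self.timestamp}


class TransferOperation(Operation):
    """Example subclass for 'transfer' operations."""

    def parse(self):
        return {**self.to_row(),
                "from": self.raw["from"],
                "to": self.raw["to"],
                "amount": self.raw["amount"]}
```

Each new operation type would then only need its own `parse` implementation, while shared columns and storage logic live in the base class.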
For this initial release the purpose was to build a foundation to expand from. This release can be split into three different segments.
The webpage is built from a Bootstrap theme on the Django framework and for now uses Chart.js for creating the charts. Cookies store which parameters the user last used, which allows switching between different operation types while retaining the same resolution and period. jQuery is used to pull data from the server via an API.
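Reading the last-used parameters back from the cookies could be sketched like this (a plain-Python illustration; the cookie names and defaults are assumptions):

```python
from http.cookies import SimpleCookie

# Hypothetical defaults used when the visitor has no cookies yet
DEFAULTS = {"resolution": "day", "period": "30"}


def chart_params(cookie_header):
    """Return the last-used chart parameters from the Cookie header,
    falling back to the defaults for anything not set."""
    cookie = SimpleCookie(cookie_header)
    return {key: cookie[key].value if key in cookie else default
            for key, default in DEFAULTS.items()}
```

In Django the same lookup would go through `request.COOKIES`, but the fallback logic is the same.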
The API provides a second way to access all data. Developers can use it to access the data directly for their own applications. The API is built on the Django REST Framework. All data displayed on the website is pulled via the API.
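Consuming the API from a script could look like the sketch below. The base URL and endpoint names are assumptions for illustration, not documented endpoints:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API_BASE = "https://example.com/api"  # hypothetical base URL


def query_url(endpoint, **params):
    """Build a query URL for a REST endpoint."""
    return f"{API_BASE}/{endpoint}/?{urlencode(params)}"


def fetch(endpoint, **params):
    """Fetch and decode a JSON response (requires network access)."""
    with urlopen(query_url(endpoint, **params)) as resp:
        return json.load(resp)

# e.g. fetch("transfers", resolution="day", period=30)
```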
A custom-made multi-threaded blockchain scraper was built for this project to retrieve blocks in order, sort them, and store the data in a database. Synchronised worker threads retrieve new blocks as long as the difference in blocks retrieved between them is no greater than 5. All blocks are put into a queue awaiting sorting.
The sorter removes blocks from the queue and, if a block is not the one currently expected, stores it in a sorted buffer until its number is reached by the processing thread. The processing thread extracts all transactions and operations from the block, along with the timestamp and block number. Operations are sent to individual threads that are set up to do all processing related to that specific operation type.
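The fetch-then-reorder pipeline described above can be sketched as follows. This is a simplified illustration, not the project's scraper: workers share a counter instead of the 5-block distance check, and `fetch` stands in for the actual blockchain call:

```python
import queue
import threading


def worker(fetch, counter, lock, out_q, last_block):
    """Claim the next block number under a lock, fetch it, and queue it.
    Blocks may land in the queue out of order."""
    while True:
        with lock:
            if counter[0] > last_block:
                return
            num = counter[0]
            counter[0] += 1
        out_q.put(fetch(num))


def sort_blocks(out_q, total):
    """Drain the queue, buffering blocks that arrive ahead of the one
    expected next, and emit them in block-number order."""
    buffered, expected, ordered = {}, 1, []
    while len(ordered) < total:
        block = out_q.get()
        buffered[block["num"]] = block
        while expected in buffered:  # drain the buffer as gaps close
            ordered.append(buffered.pop(expected))
            expected += 1
    return ordered
```

A quick run with a fake `fetch` shows blocks coming out in order even though four threads fetched them concurrently:

```python
fake_fetch = lambda n: {"num": n, "ops": []}
q, counter, lock = queue.Queue(), [1], threading.Lock()
threads = [threading.Thread(target=worker, args=(fake_fetch, counter, lock, q, 20))
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
blocks = sort_blocks(q, 20)
```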
This allows a lot of freedom for different analytics per operation type without slowing down other threads. Depending on the mode (scraping/head), data is either stored in the database immediately or held in buffers and inserted less frequently to offload the MySQL instance when processing many blocks.
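The buffered-insert idea can be sketched as a small batching helper. This is an assumed shape, not the project's code; `flush` would wrap something like `cursor.executemany` in practice:

```python
class InsertBuffer:
    """Collect rows and flush them in batches to reduce load on MySQL."""

    def __init__(self, flush, batch_size=1000):
        self.flush = flush            # callable that writes a list of rows
        self.batch_size = batch_size  # large while scraping, small in head mode
        self.rows = []

    def add(self, row):
        self.rows.append(row)
        if len(self.rows) >= self.batch_size:
            self.drain()

    def drain(self):
        """Write any pending rows and clear the buffer."""
        if self.rows:
            self.flush(self.rows)
            self.rows = []
```

In head mode a `batch_size` of 1 gives the immediate-insert behaviour; while scraping, a large batch size trades latency for fewer round trips to MySQL.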
During scraping there is a lot of load on the CPU. In head mode the load required to stay in sync with the chain is minimal, which allows analytics to be run instead.
How to contribute?
The project is on GitHub, so you can fork it and submit a pull request. You can also contact me on Discord @juliank.