Full Node Configuration Testing
It's incredible to think that we have been running the mainnet for almost two months! In that time we have gained a much better understanding of the performance of the EOS application.

### Full Nodes

To date, the most problematic node type on the network has been the Full node. These are the public-facing RPC API nodes that all users and dApps interact with. They are under a lot of strain: they store and process more data than any other node type, and they are computationally expensive because the `nodeos` application is single threaded.

One of the main culprits for Full node failure is the use of the `filter-on: *` config. With the wildcard, the node attempts to store all smart contract data, and there are a couple of contracts generating large amounts of data (spam?) on the network. Take a look at this chart:

This chart shows the top contracts with their associated actions. Notice how the top two have more activity than the `eosio` system contract actions! With the `filter-on` wildcard, all of this data takes up precious RAM and processing resources on the Full nodes. If we remove `blocktwitter` and `gu2tembqgage` from the chart, the landscape looks much healthier:

The problem is that using a whitelist of contracts for `filter-on` could leave legitimate new dApps without their contract data available via the major Full nodes on the chain. To combat this, a new config switch has been devised: `filter-out`. This allows BPs running Full nodes to keep the wildcard for all contracts, but to specifically ignore contract actions from known spam accounts. This certainly helps with system resources, but it's a tricky subject: the decision is at the discretion of each BP, and there is no consensus methodology for who should be added to the list.
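As a rough illustration of how the two switches could combine, here is a sketch of the relevant `config.ini` lines for a Full node. The `account:action:actor` triple format for `filter-out` mirrors `filter-on`, and the listed accounts are examples from the chart above, not a recommended list; check your `nodeos` version's plugin documentation for the exact syntax it accepts.

```ini
# Keep the wildcard so data for new dApp contracts is indexed by default...
filter-on = *

# ...but skip actions from known spam contracts (example entries only;
# each BP maintains this list at their own discretion)
filter-out = blocktwitter::
filter-out = gu2tembqgage::
```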
### Performance Tuning

At Block Matrix, we have been A/B testing various system configurations to determine what increases performance and/or lowers system resource utilisation. One configuration improvement has had more immediate impact than any other: **a separate disk mount for the data directories**.

Across all of our nodes, we now have a separate mount for the `blocks` and `state` data directories. We moved from storing these directories on our [Ext4](https://en.wikipedia.org/wiki/Ext4) O/S partition to a dedicated [XFS](https://en.wikipedia.org/wiki/XFS) mount, which we found gave us the best combination of performance and reliability. We initially moved 50% of our six node set across to this setup, and it's easy to see the impact this had:

We run the [Telegraf](https://www.influxdata.com/time-series-platform/telegraf/) agent on each node, which ships stats to [Grafana](https://grafana.com/) via InfluxDB. This lets us monitor all system resources, and it's easily configurable for application monitoring too. After two weeks of testing, we are now confident in moving all our remaining nodes to this configuration. If you give this a try, or already have a configuration like this in place, we'd love to hear about it, especially which filesystem you're using! For the techies out there, here is our current `fstab`:

```
/eosdata xfs noatime,nodiratime,attr2,nobarrier,logbufs=8,logbsize=256k 0 0
```

### Moving Forward

There are many other improvements and techniques that we have planned for testing; it is extremely important that we don't rely on vertical scaling to handle the ever increasing demands of the network. Emerging tech such as [Optane drives](https://www.intel.co.uk/content/www/uk/en/architecture-and-technology/intel-optane-technology.html) is exciting, but it is imperative that the `nodeos` application is continually improved to use all available resources on the host machine.
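One easy sanity check for the separate-mount setup described under Performance Tuning is to confirm that the data directory really lives on a different filesystem than the O/S partition, by comparing the device IDs reported by `stat()`. This is a minimal sketch, not part of our tooling, and the `/eosdata` path is illustrative:

```python
import os

def on_separate_filesystem(data_dir: str, os_root: str = "/") -> bool:
    """Return True if data_dir lives on a different filesystem than os_root.

    st_dev identifies the device containing a file, so two paths on the
    same mount report the same value.
    """
    return os.stat(data_dir).st_dev != os.stat(os_root).st_dev

# Example (path is illustrative): once /eosdata is its own XFS mount,
# on_separate_filesystem("/eosdata/blocks") should return True.
```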
--- [Block Matrix](https://blockmatrix.network) are currently a paid standby BP for the EOS network. We are super passionate about the EOS project and are focussed on creating robust infrastructure and open sourcing everything we build to support the network and the wider community. [Github](https://github.com/BlockMatrixNetwork) [Telegram](https://t.me/blockmatrixnetwork)