I'll try to answer the questions in order.
- Frankfurt, Germany, on a DigitalOcean Droplet
- It should state that. Since the limit is variable, the wording was vague, but I'll clarify it; the standard tests use a 30-second timeout / time limit.
- I was originally running it on my dev machine; that was the first run from the server, so it should populate over the week. If it's still not right this time next week, I'll adjust as needed. You're probably right, though: I wasn't sure exactly how I wanted the trends section to look, so if any section badly needs a refactor, that's the one that will get it.
- Latency is the average time across 5 samples of the following call (a fuller sketch of the sampling loop follows the snippet):
import time

# `api` is a nectarengine Api instance pointed at the node under test
start_time = time.time()
# Make a simple query to measure latency - use a lightweight call
api.find("tokens", "tokens", {"symbol": "SWAP.HIVE"}, limit=1)
latency = time.time() - start_time
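Put together, the whole measurement looks roughly like this (a rough sketch, not the exact benchmark code; it assumes nectarengine's Api client, imported here as `from nectarengine.api import Api`, and uses a placeholder node URL):

import time

from nectarengine.api import Api

NODE_URL = "https://api.hive-engine.com/rpc/"  # placeholder; the benchmark points this at each node under test
SAMPLES = 5

api = Api(url=NODE_URL)
samples = []
for _ in range(SAMPLES):
    start_time = time.time()
    # Same lightweight query as above
    api.find("tokens", "tokens", {"symbol": "SWAP.HIVE"}, limit=1)
    samples.append(time.time() - start_time)

latency = sum(samples) / len(samples)  # reported latency = mean of the 5 samples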
I would love some more input; I kind of went into this blindly. The original intent was to do with @flowerengine what we did with @nectarflower: run a benchmark every hour and store the results in the account metadata.
curl -s --data '{"jsonrpc":"2.0", "method":"database_api.find_accounts", "params": {"accounts":["flowerengine"]}, "id":1}' https://api.hive.blog | jq '.result.accounts[0].json_metadata | fromjson' | jq '.nodes[]'
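If you'd rather pull that from Python instead of curl, the same lookup is just a JSON-RPC POST; here's a quick sketch using requests (not part of the benchmark code itself):

import json

import requests

# Same call as the curl above: fetch @flowerengine's json_metadata and list the node entries
payload = {
    "jsonrpc": "2.0",
    "method": "database_api.find_accounts",
    "params": {"accounts": ["flowerengine"]},
    "id": 1,
}
resp = requests.post("https://api.hive.blog", json=payload, timeout=30)
metadata = json.loads(resp.json()["result"]["accounts"][0]["json_metadata"])
for node in metadata["nodes"]:
    print(node)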
@nectarflower runs on the :00 minute mark and @flowerengine runs on the :30 minute mark, to avoid a clash. Both benchmarks take ~10 minutes to run through all the servers.
I would absolutely be thrilled to get a PR if you see something that could be done better. Also, I am aware the head section of this post was printed twice; that should be fixed on the next run. I don't know how I didn't catch that the first 20 times I ran it.
Thanks for the answers. I would love to do more PRs, but lately even the time to reply is a luxury. I will keep watching this and provide feedback, and if I find anyone keen on helping out, I will point them in your direction.
I also noticed you guys used Python 3.13 (which is very recent). It would be nice to support older versions, but it's not a problem if not, since in a few months Python 3.13 will be available everywhere anyhow.
Just a thought.
I appreciate the feedback just as much as a pull request.
It just happens to be the version I'm using; there are no 3.13-specific features being used, so it should work without problems several versions back (I've tested 3.11 and 3.12).
Not for those... but for 3.10 there is. Close enough though =)
Unfortunately, Ubuntu 22 is still behind, plus a couple of other Linux distros... either way, this would need to be changed in the repo:
$ cat src/engine_bench/__init__.py
"""Nectar-bench package for benchmarking hive-engine nodes using nectarengine."""

import sys

__version__ = "0.2.0"

# Check Python version
if sys.version_info < (3, 13):
    print("Error: engine-bench requires Python 3.13 or higher")
    sys.exit(1)
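If 3.11 really is the floor (you mentioned 3.11 and 3.12 both work), the change could be as small as relaxing that guard; just a suggestion, untested against the repo:

import sys

# Relaxed guard: 3.11 is the oldest version reported working above
if sys.version_info < (3, 11):
    print("Error: engine-bench requires Python 3.11 or higher")
    sys.exit(1)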
I can install whatever 3.x I need, but I'm just thinking of people who stick to the Python versions their Linux distro ships.
I am pushing because I see good stuff... not because I dislike stuff. 😎😍
Good catch, I'll destroy that section in both benchmarks.
Thanks!