@petes-fan-club
3x switches
2x firewalls
3x Kubernetes Control Planes
6x Kubernetes Worker Nodes
1x Management Server
1x Backup Server
The boxes above make up the clusters; on top of them we run numerous instances of LCD, FCD, RPC, and API services.
The cluster is controlled mostly through GitOps, and we use Prometheus and Grafana with custom dashboards for real-time monitoring, plus some additional services outside the cluster that alert us to critical issues.
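To illustrate the "services outside the cluster" idea, here is a minimal Python sketch of an external watchdog that polls a health endpoint and only pages after repeated failures. The endpoint URL, timeout, and failure threshold are illustrative assumptions, not our actual configuration.

```python
# Hypothetical out-of-cluster watchdog sketch: poll a health endpoint
# and decide when to raise an alert. All values here are assumptions
# for illustration only.
import urllib.request

FAILURE_THRESHOLD = 3  # assumed: consecutive failures before paging


def check_endpoint(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False


def should_alert(consecutive_failures: int,
                 threshold: int = FAILURE_THRESHOLD) -> bool:
    """Alert only after several failures in a row, to avoid flapping
    on a single dropped probe."""
    return consecutive_failures >= threshold
```

The point of the threshold is exactly what the thread describes: automation catches most things, but a naive single-probe alert either misses issues or floods you with noise.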
It's a combination of time spent actively monitoring and being on standby to handle issues as they come up. Consider it a retainer of service: standard practice when providing managed services to customers who want 24/7 support, including outside of regular business hours.
Automated alerts and monitoring sound great on the surface, and they catch most things, but not all. Relying on them alone would be fatal, as things can get missed by automation.
Then there's the skillset involved in running a cluster like this: being able to troubleshoot and drill down to the issue using console commands, read IOPS reports, and understand distributed storage systems (this isn't RAID 5).
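As a rough sketch of what "reading IOPS" means mechanically, here is a small Python example that diffs two samples of Linux `/proc/diskstats` and divides by the interval; the sample device name is illustrative. The field layout follows the kernel's documented iostats format (reads completed and writes completed are the first and fifth fields after the device name).

```python
# Compute total IOPS for a device from two /proc/diskstats samples.
# fields[3] = reads completed, fields[7] = writes completed,
# per Documentation/admin-guide/iostats.rst in the Linux kernel tree.

def parse_diskstats(text: str, device: str) -> tuple[int, int]:
    """Return (reads_completed, writes_completed) for one device."""
    for line in text.splitlines():
        fields = line.split()
        if len(fields) >= 8 and fields[2] == device:
            return int(fields[3]), int(fields[7])
    raise ValueError(f"device {device!r} not found")


def iops(prev: tuple[int, int], curr: tuple[int, int],
         seconds: float) -> float:
    """Read+write operations per second between two samples."""
    return ((curr[0] - prev[0]) + (curr[1] - prev[1])) / seconds
```

Tools like `iostat` do this for you, but being able to sanity-check the raw counters matters when a distributed storage layer is misbehaving.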
There's also the monitoring of the servers themselves: maintaining BIOS updates and cluster updates, along with container image updates. Some of this you simply would not want to automate, as a bad update could take down critical services.
I hope that helps explain some of what goes into this.
The maintenance and support cost is for the public infrastructure we have built. Keplr doesn't provide their own full public infrastructure for LUNC, so you are comparing apples with oranges.
Every time we talk about regulation, if someone refers to the USA, the majority of the community says that we are decentralized and not limited to American laws. Yet when we talk about salaries, most of the time we see developer rates based on the USA, especially the west coast. Why? Do you think you deserve to be paid like the most overpaid developers? That's something you need to prove with the quality of your work.
For programmers it doesn't matter where they work, because they can work remotely over the internet. The earnings benchmark for this professional group is set by developed countries, e.g. the USA.
Guys, I'm European and I work remotely as well. My point is not that you have to work for free or for the lowest salary per se. What we observe here, though, is Californian salaries used as a benchmark for mediocre-quality work on a decentralized chain. Prove me wrong that something extraordinary is happening here and I will apologize to everybody who felt offended. Again, I don't support anyone working for free, but let's be honest and realistic. It's something that has been concerning several validators lately as well, if you watch the comments here closely.
The BIG difference here is that this is not a bunch of standalone servers running some background services. This is a clustered setup with even its storage clustered, and everything is containerized. That requires a certain level of skill to manage and maintain, and there are many moving parts to this infrastructure. Just go ask TFL and they will tell you that it doesn't always run smoothly: sometimes the FCD will break and you need to get in and mess with SQL; sometimes terrad instances become unsynced and do not re-sync, which needs to be monitored and corrected manually. We have created custom scripts for this, but more load equals more monitoring and tuning required.
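As a hedged sketch of the kind of check such custom scripts perform, here is a Python function that decides from a Tendermint `/status` RPC response whether a terrad instance looks unsynced. The `sync_info.catching_up` and `latest_block_time` fields are standard Tendermint RPC; the 60-second staleness threshold is an illustrative assumption, not our actual tuning.

```python
# Decide whether a node is unsynced based on a parsed Tendermint
# /status response. Threshold value is an assumption for illustration.
from datetime import datetime, timedelta, timezone

MAX_BLOCK_AGE = timedelta(seconds=60)  # assumed acceptable block lag


def is_unsynced(status: dict, now: datetime,
                max_age: timedelta = MAX_BLOCK_AGE) -> bool:
    """True if the node reports catching_up, or if its latest block
    is older than max_age (i.e. it silently stopped advancing)."""
    sync = status["result"]["sync_info"]
    if sync["catching_up"]:
        return True
    block_time = datetime.fromisoformat(
        sync["latest_block_time"].replace("Z", "+00:00"))
    return now - block_time > max_age
```

The second condition matters in practice: a node can report `catching_up: false` while its block height has quietly stalled, which is exactly the "does not re-sync" failure mode that needs manual action.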
Also, it looks like we need two more switches for fault-tolerance purposes. If I'm not mistaken, in the photo one is for SAN traffic and the other for LAN, so ideally we would need two segregated channels for SAN and two different routes for LAN. Some might also argue for a second firewall for the same reason.
How much is the rack footprint hosting cost in that DC?