When Weaverize started in 2015, there was no ready-made cloud solution, so we developed our own based on CoreOS instances (now discontinued) clustered with etcd. Services were registered in Consul and served by HAProxy, and were managed by fleet (via fleetctl), which made it possible to deploy systemd units across the cluster. Our entire architecture is available as open source at https://github.com/auguster/coreos-consul-haproxy
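For context, a fleet unit was a plain systemd unit file extended with an [X-Fleet] section controlling how it was scheduled across the cluster. A minimal sketch (the service name, container image, and port are illustrative, not taken from our repository):

```ini
# myapp.service — illustrative fleet unit (name and image are assumptions)
[Unit]
Description=MyApp container
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker kill myapp
ExecStartPre=-/usr/bin/docker rm myapp
ExecStart=/usr/bin/docker run --name myapp -p 8080:8080 example/myapp
ExecStop=/usr/bin/docker stop myapp

[X-Fleet]
# Never schedule two instances of this unit on the same machine
Conflicts=myapp@*.service
```

A unit like this would be submitted with `fleetctl start myapp.service`, while Consul registration and HAProxy routing handled traffic to wherever fleet placed it.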
Our largest Kubernetes cluster is on-premises; it runs more than 145 pods spread over 6 nodes. It is a very heterogeneous cluster, mixing x64 and arm64 machines, some equipped with CUDA GPUs for neural network training and inference. Storage is provided by a Ceph cluster (deployed with Rook), which offers data redundancy, better use of disk space, and better performance than RAID. The entire Ceph cluster is regularly backed up differentially to a remote machine so that data can be recovered in the event of a total cluster failure.
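Scheduling workloads on such a heterogeneous cluster can be expressed with standard Kubernetes mechanisms: node labels to keep CUDA jobs on x64 GPU nodes, a GPU resource limit, and a Rook storage class for Ceph-backed volumes. A minimal sketch (the pod name, image, and storage class name are illustrative assumptions; it also assumes the NVIDIA device plugin is deployed):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: training-data               # illustrative name
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: rook-ceph-block # typical Rook-provided Ceph RBD class
  resources:
    requests:
      storage: 50Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: training-job                # illustrative name
spec:
  nodeSelector:
    kubernetes.io/arch: amd64       # keep CUDA workloads off the arm64 nodes
  containers:
  - name: trainer
    image: example/trainer:latest   # illustrative image
    resources:
      limits:
        nvidia.com/gpu: 1           # requires the NVIDIA device plugin
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: training-data
```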
Since the fire in the OVH data center in Strasbourg in March 2021, Weaverize no longer relies on OVH as its primary cloud provider. We have put in place a hybrid cloud strategy: on-premises servers carry the baseline load, and OVH Public Cloud instances reinforce the cluster as needed during load peaks. We are extending this mechanism to other cloud providers (Azure, AWS, etc.) to guard against the failure of any single provider.
Here are some of our projects in the cloud domain:
An easy-to-use, collaborative online video editing application that allows editing video directly in the cloud. It was originally built on our old infrastructure and used OVH Object Storage; we have since migrated it to Kubernetes and Ceph. During the launch of our video editing platform, we took part in the development of OVH's Public Cloud GPU offering.
The Weaverize internal cloud described above, which supports all of Weaverize's software infrastructure (GitLab, Nextcloud, mail, this site…) as well as customer developments.