Spinning up a self-hosted cluster of Prebid Servers requires some up-front planning. The components you'll need are highlighted in this hardware layout diagram:
Assuming you need to serve more than one geographic region, you'll need a Global Load Balancing service so your users hit the servers in the region closest to them.
Once users have reached their nearest server cluster, a load balancer will direct them in one of two ways:

- If the request path is a Prebid Server endpoint such as /openrtb2/auction, they should be directed to one of the Prebid Server machines.
- If the request path is /cache, they should be directed to one of the Prebid Cache servers.
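The path-based split above can be sketched as a small routing rule. The pool names here are hypothetical placeholders; a real deployment would configure this in the load balancer itself rather than in code.

```go
package main

import (
	"fmt"
	"strings"
)

// backendFor maps a request path to the server pool that should handle it.
// Pool names are illustrative, not part of any Prebid configuration.
func backendFor(path string) string {
	if path == "/cache" || strings.HasPrefix(path, "/cache/") {
		return "prebid-cache-pool"
	}
	// Everything else (/openrtb2/auction and the other PBS endpoints)
	// goes to the Prebid Server pool.
	return "prebid-server-pool"
}

func main() {
	fmt.Println(backendFor("/openrtb2/auction")) // prebid-server-pool
	fmt.Println(backendFor("/cache"))            // prebid-cache-pool
}
```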
These Prebid Server machines will have a mix of network and CPU work. They benefit from a fair amount of memory so they can cache stored requests and many versions of the GDPR vendor list.
Other services you may want to run alongside Prebid Server are:
The PBC servers consume very little CPU or memory; they simply translate between Prebid protocols and the chosen NoSQL system that implements the storage cluster.
You can set up Redis, Aerospike, or Cassandra. How many nodes you need will depend on the expected traffic, your traffic mix, and the average length of time that objects are cached.
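A rough way to size the storage cluster from those variables: at steady state, the number of resident objects is roughly the write rate times the average TTL. The figures below are made-up inputs for illustration; substitute your own traffic numbers.

```go
package main

import "fmt"

// Back-of-the-envelope sizing for the cache storage cluster.
// All inputs are hypothetical examples.
func main() {
	const (
		writesPerSec = 5000.0 // cached objects stored per second
		avgTTLSec    = 300.0  // average object lifetime (5 minutes)
		avgObjBytes  = 4096.0 // average cached payload size
	)
	residentObjects := writesPerSec * avgTTLSec
	residentBytes := residentObjects * avgObjBytes
	fmt.Printf("steady-state objects: %.0f\n", residentObjects)
	fmt.Printf("steady-state data: %.1f GB\n", residentBytes/1e9)
}
```

Divide the steady-state data size by per-node capacity (leaving headroom for replication and traffic spikes) to get a starting node count.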
Account information and StoredRequests are stored in a database queried by Prebid Server at runtime. PBS has an internal LRU cache for this database, so it only queries when there’s an account or stored request it hasn’t seen recently.
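The caching behavior described above can be sketched as a small in-process LRU: look up locally first, and fall through to the database only on a miss, evicting the least-recently-used entry when full. This is a minimal illustration; PBS's real cache also handles TTLs and invalidation, which are omitted here.

```go
package main

import (
	"container/list"
	"fmt"
)

// lruCache is a minimal sketch of an in-process LRU sitting in front of
// the account/stored-request database.
type lruCache struct {
	cap   int
	order *list.List               // front = most recently used
	items map[string]*list.Element // key -> element holding an *entry
}

type entry struct {
	key, val string
}

func newLRU(capacity int) *lruCache {
	return &lruCache{cap: capacity, order: list.New(), items: map[string]*list.Element{}}
}

func (c *lruCache) Get(key string) (string, bool) {
	if el, ok := c.items[key]; ok {
		c.order.MoveToFront(el)
		return el.Value.(*entry).val, true
	}
	return "", false // miss: the caller falls through to the database
}

func (c *lruCache) Put(key, val string) {
	if el, ok := c.items[key]; ok {
		el.Value.(*entry).val = val
		c.order.MoveToFront(el)
		return
	}
	if c.order.Len() >= c.cap { // full: evict least recently used
		oldest := c.order.Back()
		c.order.Remove(oldest)
		delete(c.items, oldest.Value.(*entry).key)
	}
	c.items[key] = c.order.PushFront(&entry{key, val})
}

func main() {
	cache := newLRU(2)
	cache.Put("acct-1", `{"status":"active"}`)
	cache.Put("acct-2", `{"status":"active"}`)
	cache.Get("acct-1")      // touch acct-1 so acct-2 becomes oldest
	cache.Put("acct-3", "x") // evicts acct-2
	_, hit := cache.Get("acct-2")
	fmt.Println("acct-2 cached:", hit) // false: would re-query the DB
}
```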
Getting data to each of the regions likely involves setting up a source database that replicates to each region.
Note that there aren’t any open source tools for populating this database. Each PBS host company establishes their own methods of populating data from their internal systems.
You’ll want to hook both Prebid Server and Prebid Cache up to an operational monitoring system.
The process for actually installing and configuring the software differs between the Go and Java versions. See the relevant section as a next step.