Demystifying Google Global Cache



The amount of data Google serves over the Internet is undoubtedly enormous, and their network is correspondingly efficient at doing so. One key component of Google's information superhighway is their CDN-like cache infrastructure: the Google Global Cache. We ran some tests with ProbeAPI and took a closer look at this vast network, which enables us to enjoy YouTube at the best quality our ISP's capacity allows.

Our experiment showed us the extent of the deployed network and its worldwide coverage. We observed an impressive 2383 cache instances across 800 locations around the globe. Keep in mind that there may be more locations: a particular ISP may not have been reachable through any probe when we ran the experiment, or we may have no probes at all in an ISP that is served by a particular cache server, especially in remote regions where the presence of ProbeAPI probes is still scarce.

We used all the probes available at the moment we ran the experiment, which delivered a total of 240184 individual results; these were grouped and cross-linked to obtain the relevant data.

The cache locations are codenamed with three-letter airport codes. By cross-linking the IATA airport codes with the codes embedded in the cache servers' names, we were able to obtain their approximate geographical locations.
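For illustration, here is a minimal Python sketch of that cross-linking step. It assumes a hypothetical airports.csv file (columns iata, city, country, lat, lon) and a few example node names; it is not the exact tooling we ran.

    import csv
    import re

    # A three-letter IATA code is embedded in GGC node names such as
    # "ber01s12" or "vodafone-ber1" (examples discussed later in this post).
    AIRPORT_RE = re.compile(r"(?:^|-)([a-z]{3})\d", re.IGNORECASE)

    def load_iata_locations(path="airports.csv"):
        # Hypothetical CSV with columns: iata, city, country, lat, lon
        locations = {}
        with open(path, newline="", encoding="utf-8") as fh:
            for row in csv.DictReader(fh):
                locations[row["iata"].upper()] = (row["city"], row["country"],
                                                  float(row["lat"]), float(row["lon"]))
        return locations

    def locate_cache(node_name, locations):
        # Return the approximate location of a cache node, or None if no code matches.
        match = AIRPORT_RE.search(node_name)
        return locations.get(match.group(1).upper()) if match else None

    iata = load_iata_locations()
    for name in ["ber01s12", "vodafone-ber1", "hansenet-ber2"]:
        print(name, "->", locate_cache(name, iata))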

Continent Detected Cache Instances
Asia 740
Europe 620
North America 487
South America 347
Africa 103
Oceania 81
Central America 5

 

The top 20 countries with the highest number of detected cache instances are:

Continent Country Detected Cache Instances
North America United States 296
Asia Russia 263
South America Brazil 220
Asia India 83
North America Canada 76
North America Mexico 70
Europe United Kingdom 67
Asia Japan 63
Europe Ukraine 59
Asia Thailand 51
Oceania Australia 48
Europe Poland 45
Asia Indonesia 38
Europe Germany 36
South America Argentina 31
Oceania New Zealand 27
Europe Spain 27
Europe Italy 27
Europe France 26
Asia Bangladesh 24

 

Top 25 cities with the most detected cache instances:

City Country Detected Cache Instances Detected Networks Ratio Networks/Caches
Moscow Russia 42 918 21,9
Sao Paulo Brazil 31 740 23,9
Tokyo Japan 31 116 3,7
Rio De Janeiro Brazil 28 322 11,5
Kiev Ukraine 28 277 9,9
London United Kingdom 24 347 14,5
Dhaka Bangladesh 22 51 2,3
Bangkok Thailand 21 28 1,3
St. Petersburg Russia 18 102 5,7
Sofia Bulgaria 17 104 6,1
Yekaterinburg Russia 17 84 4,9
Buenos Aires Argentina 17 79 4,6
Jakarta Indonesia 17 28 1,6
Bucharest Romania 15 92 6,1
Belgrade Serbia 15 42 2,8
Budapest Hungary 14 123 8,8
Sydney Australia 14 54 3,9
Mumbai India 14 46 3,3
Montreal Canada 14 43 3,1
Auckland New Zealand 14 39 2,8
Warsaw Poland 13 318 24,5
New York United States 13 276 21,2
Novosibirsk Russia 13 76 5,8
Toronto Canada 13 71 5,5
Kuala Lumpur Malaysia 13 28 2,2

To give ourselves an idea of the number of users covered by the servers we detected, we ranked the top countries by estimated number of users.

Country Detected Cache Instances est. Number of Users Users/Cache
India 83 236.000.000 2.845.000
United States 296 219.000.000 741.000
Brazil 220 105.000.000 478.000
Japan 63 93.000.000 1.478.000
Russian Federation 263 78.000.000 298.000
Indonesia 38 66.000.000 1.733.000
Germany 36 66.000.000 1.826.000
Nigeria 8 61.000.000 7.666.000
Mexico 70 61.000.000 870.000
France 26 52.000.000 2.000.000
United Kingdom 67 51.000.000 764.000
Egypt 13 45.000.000 3.455.000
Philippines 11 41.000.000 3.715.000
Vietnam 16 40.000.000 2.496.000
Turkey 1 35.000.000 35.212.000
Spain 27 34.000.000 1.271.000
Italy 27 34.000.000 1.268.000
Bangladesh 24 32.000.000 1.321.000
Colombia 22 30.000.000 1.375.000
Argentina 31 30.000.000 976.000
Pakistan 3 27.000.000 9.147.000
South Africa 14 23.000.000 1.642.000
Poland 45 22.000.000 500.000
Kenya 6 21.000.000 3.517.000

* Thanks to APNIC for helping us estimate the number of users in each network.
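The users-per-cache column is simply the APNIC user estimate divided by the number of cache instances we detected in that country; a minimal sketch, using a few example rows copied from the table rather than the full dataset:

    # Example rows from the table above; the full dataset is much larger.
    detected_caches = {"India": 83, "United States": 296, "Turkey": 1}
    estimated_users = {"India": 236_000_000, "United States": 219_000_000, "Turkey": 35_000_000}

    for country in sorted(estimated_users, key=estimated_users.get, reverse=True):
        ratio = estimated_users[country] / detected_caches[country]
        print(f"{country}: {detected_caches[country]} caches, ~{ratio:,.0f} users per cache")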

The following map pinpoints all the instances of Google Global Cache we could observe. Please remember that the pins point to the cities' airports rather than the caches' precise locations within each city, which is close enough for our purposes.

The huge number of probes we used for the experiment gave us an impressive picture of the variety of networks connected to each cache location.

Clicking on a pin shows a list of cache instances and, below it, the list of networks connected to each cache.

Top 25 Cities with the most networks detected:

City Country Cache Instances Detected Networks Ratio Networks/Caches
Moscow Russia 42 918 21,9
Sao Paulo Brazil 31 740 23,9
Chicago United States 11 511 46,5
Frankfurt Germany 7 475 67,9
Washington United States 10 470 47,0
Paris France 12 405 33,8
Amsterdam Netherlands 8 383 47,9
Dallas-Fort Worth United States 9 356 39,6
London United Kingdom 24 347 14,5
Rio De Janeiro Brazil 28 322 11,5
Warsaw Poland 13 318 24,5
Prague Czech Republic 10 288 28,8
Kiev Ukraine 28 277 9,9
New York United States 13 276 21,2
Los Angeles United States 11 262 23,8
Miami United States 6 236 39,3
Atlanta United States 7 177 25,3
San Jose United States 3 159 53,0
Milan Italy 4 158 39,5
Katowice Poland 4 145 36,3
Belo Horizonte Brazil 12 131 10,9
Madrid Spain 9 125 13,9
Budapest Hungary 14 123 8,8
Tokyo Japan 31 116 3,7
Mountain View United States 2 109 54,5

We can observe that there are locations with a high number of access points and also a high number of networks; in cases like Moscow or São Paulo, the ratio of networks to access points is very high. A notable contrast is Tokyo, where we observed 116 ISPs across 31 cache access points, a ratio of only 3,7.

It also has to be taken into account that many access points are dedicated to specific segments. For example, in Berlin, Germany, Vodafone and Telefonica Germany have their own dedicated access points (vodafone-ber1 and hansenet-ber2). On the other hand, ber01s12 is probably owned by Deutsche Telekom AG, the biggest telecom in Germany, which gives other companies access through its infrastructure, so we can see Verizon, Kabel Deutschland and some Telefonica users go through this access point as well. Finally, ecix-ber1 seems to be reserved for business-oriented ISPs, such as e.discom, WEMACOM, the multimedia specialist MyWire and the private IT infrastructure provider Macnetix.
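Grouping the raw probe results makes this pattern easy to see: count how many distinct networks appear behind each access point. The observation pairs below are illustrative, based on the Berlin examples above, not the raw result set:

    from collections import defaultdict

    # (cache access point seen by the probe, probe's network) -- illustrative pairs.
    observations = [
        ("vodafone-ber1", "Vodafone"),
        ("hansenet-ber2", "Telefonica Germany"),
        ("ber01s12", "Verizon"),
        ("ber01s12", "Kabel Deutschland"),
        ("ber01s12", "Telefonica Germany"),
        ("ecix-ber1", "WEMACOM"),
        ("ecix-ber1", "MyWire"),
    ]

    networks_per_cache = defaultdict(set)
    for cache, network in observations:
        networks_per_cache[cache].add(network)

    for cache, networks in sorted(networks_per_cache.items()):
        kind = "dedicated" if len(networks) == 1 else "shared"
        print(f"{cache}: {kind}, {len(networks)} network(s)")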

Another important factor that influences the number of detected networks is our own coverage with ProbeAPI. ProbeAPI has a high number of active probes in Russia, with Moscow having the highest concentration of them. That is why this data has to be considered a snapshot of Google's CDN taken with all our available probes at one particular time.

Along the same line, an interesting case is Turkey, where we could detect only one cache location despite typically having a high concentration of active probes there. Turkey has been blocking access to YouTube and other Google services intermittently in recent years following political controversies, which could explain the low number of results obtained from an otherwise well-covered region.

Conclusion

There is much mystery surrounding the Google Global Cache, one of Google's most important pieces of infrastructure. We were able to gather an impressive amount of information with a single ProbeAPI measurement, which helped us understand the extent and distribution of Google's cache locations.

Taking into account that this information is a snapshot obtained with one single measurement, we found at least 2383 cache instances across 800 locations worldwide that Google is actively using throughout their partner network operators.

There are still gaps to be covered by more measurements over time, which will surely reveal more networks and cache locations. This is a case where continuous monitoring offers a good opportunity for thorough measurements, not only in terms of traffic variability but also in terms of coverage, as probes connect to ProbeAPI from different locations over any period of time.




Towards an LMAP Specification of ProbeAPI.


In an effort to bring ProbeAPI closer to the Internet measurement community, we have been paying close attention to the new LMAP specification for Internet measurement platforms. LMAP is being defined with the goal of standardizing large-scale measurement systems so that consistent measurements can be performed by diverse entities. Implementations may differ in their details, but complying with the standard makes their components, results and instructions comparable.

“Amongst other things, standardisation enables meaningful comparisons of measurements made of the same Metric at different times and places, and provides the operator of a Measurement System with criteria for evaluation of the different solutions that can be used for various purposes including buying decisions (such as buying the various components from different vendors). Today’s systems are proprietary in some or all of these aspects.” – RFC 7594, July 2015

To find out how compliant ProbeAPI is with this standard, we started a design and implementation comparison in terms of an LMAP system. In this post we focus on the general outline of the system: its main components, their roles and the data flow. A detailed comparison of the data model and measurement methods will have to wait for a dedicated post, since they are very extensive topics.

The general working scheme of ProbeAPI includes most components from the LMAP specification in very similar roles:

The user makes a measurement request through the API. The API, hosted in the cloud, communicates the testing instructions to the Controller Interface, which forwards them to the Bootstrapper and Controller outside the cloud. The Bootstrapper is in charge of integrating the probes into the whole system and updates the database to keep track of disconnecting probes. It is implemented using an XMPP server, whose lightweight protocol allows all probes relevant to a particular measurement to receive the message simultaneously.

The probes themselves report their online status directly to the API, while the Bootstrapper keeps track of the ones that disconnect. The probes receive the measurement instructions from the Controller and, after carrying them out, send the results directly to the API to be delivered to the user.

LMAP Scheme for ProbeAPI

The Controller and Bootstrapper component combines the Controller, an element inside the scope of LMAP, with the Bootstrapper, which lies outside the LMAP scope.

 

When a new probe comes online, it generates its own unique ID, which is sent together with the results so they can be separated not only by Probe-ID but also by ASN or country. The probe then calls the login method of the cloud interface so that it is accounted as online. When a probe logs off, it is the Bootstrapper service that records the disconnection in the database.
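A minimal sketch of that start-up flow, assuming a hypothetical HTTPS login endpoint; the real cloud interface and its method names are not shown here.

    import uuid
    import requests

    API_BASE = "https://api.example.invalid"  # hypothetical endpoint, not the real cloud API

    def start_probe():
        # The probe generates its own unique ID, later attached to every result row.
        probe_id = str(uuid.uuid4())
        # Calling the login method marks the probe as online; disconnections are
        # recorded by the Bootstrapper, not by the probe itself.
        requests.post(f"{API_BASE}/probe/login", json={"probe_id": probe_id}).raise_for_status()
        return probe_id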

Interaction Diagram for MA-Login, Measurement Instruction and MA-Logout.

When a measurement instruction is sent, the Control Protocol is an XMPP message which can contain, for example, the following information (a sketch of such a payload follows the list):

  • <Task-ID> Task-ID
  • <MA-ID> Probe-ID
  • <suppression> TimeOut
  • <instruction> Command
  • <parameter> host_address
  • <parameter> ttl
  • <parameter> count
  • <parameter> timeout
  • <parameter> sleep
  • <parameter> BufferSize
  • <parameter> fragment
  • <parameter> resolve
  • <parameter> ipv6only
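Here is a sketch of how such a payload could be assembled, reusing the element names from the list above; the wrapper element and the exact wire format are assumptions, not ProbeAPI's actual XMPP stanzas.

    import xml.etree.ElementTree as ET

    def build_instruction(task_id, probe_id, command, timeout, **params):
        # Wrapper name "measurement-task" is an assumption for illustration only.
        root = ET.Element("measurement-task")
        ET.SubElement(root, "Task-ID").text = task_id
        ET.SubElement(root, "MA-ID").text = probe_id
        ET.SubElement(root, "suppression").text = str(timeout)
        ET.SubElement(root, "instruction").text = command
        for name, value in params.items():
            ET.SubElement(root, "parameter", name=name).text = str(value)
        return ET.tostring(root, encoding="unicode")

    print(build_instruction("task-42", "probe-7f3a", "ping", 30,
                            host_address="example.com", ttl=64, count=4))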

A Task-ID is generated by the API and passed to the probe with each measurement, so that the results are easily recognized when they are collected. Failure information from the Measurement Agents is included in the results.

Here is an example of the result header fields obtained for httpget measurements (a minimal parsing sketch follows the list):

  • HTTPGet_Status
  • HTTPGet_Destination
  • HTTPGet_TimeToFirstByte
  • HTTPGet_TotalTime
  • HTTPGet_ContentLength
  • HTTPGet_DownloadedBytes
  • Network_NetworkName
  • Network_LogoURL
  • Network_CountryCode
  • Network_NetworkID
  • DateTimeStamp
  • Country_Flag<url>
  • Country_Name
  • Country_State
  • Country_StateCode
  • Country_CountryCode
  • Probe-ID
  • ASN_Name
  • ASN_ID
  • Location_Latitude
  • Location_Longitude
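To give an idea of how such a row can be consumed, here is a minimal sketch that parses one hypothetical JSON result and derives a throughput figure; the field values are invented, and the units (milliseconds, bytes) are assumptions based on the fields above.

    import json

    raw = """{
      "HTTPGet_Status": "OK",
      "HTTPGet_Destination": "http://example.com/",
      "HTTPGet_TimeToFirstByte": 112,
      "HTTPGet_TotalTime": 431,
      "HTTPGet_DownloadedBytes": 1153433,
      "Country_CountryCode": "DE",
      "ASN_ID": "AS3320",
      "Probe-ID": "probe-7f3a"
    }"""

    result = json.loads(raw)
    # bytes * 8 / milliseconds = kilobits per second
    throughput_kbps = result["HTTPGet_DownloadedBytes"] * 8 / result["HTTPGet_TotalTime"]
    print(result["Probe-ID"], result["ASN_ID"], f"{throughput_kbps:.0f} kbit/s")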

The measurements currently available are: ICMP (ms), HTTP GET (ms), page-loading time (ms) and DNS query time.

The API itself does not offer scheduling functions yet, but they are being implemented. Since ProbeAPI's measurements are active, each MA normally measures one flow per instruction. The report data can be delivered raw or formatted as JSON. There are also plans to implement scheduling for reports; right now, reports are immediate.

There is also no Subscriber Parameter DB, since this information is delivered directly with the results from the probes: AS number, country, AS name and geographic location are provided with every result.


A study on the coverage of ProbeAPI and RIPE Atlas


RIPE Atlas has been successful in establishing a fairly extensive network of measurement probes. They are placed in different environments: server rooms, volunteers' offices, universities and households. Since the placement of a probe requires a physical device to be installed, the deployment and growth rate of the network is limited by the available physical distribution capacity and the cost of producing enough devices. On the flip side, being a hardware-based measurement platform not only guarantees stable availability of the probes, but also means there is a genuine piece of hardware that allows any customizations the measurements may require.

Top 20 ASNs by users and their RIPE Atlas coverage

Although Atlas has already achieved an impressive number of deployed probes, there are still large networks in need of coverage.

ASN Country (ISO 2-letter code) Users (APNIC Labs estimate) RIPE Atlas probes (online)
AS4134 CN 336 million 2
AS4837 CN 204 million 0
AS9829 IN 66 million 0
AS7922 US 55 million 336
AS17974 ID 47 million 1
AS8151 MX 39 million 4
AS24560 IN 33 million 5
AS8452 EG 33 million 0
AS4713 JP 30 million 8
AS7018 US 29 million 40
AS9121 TR 27 million 8
AS3320 DE 26 million 206
AS28573 BR 24 million 20
AS45595 PK 23 million 1
AS9299 PH 22 million 5
AS9808 CN 21 million 0
AS701 US 20 million 80
AS45899 VN 19 million 1
AS18881 BR 19 million 8
AS4766 KR 18 million 8

ProbeAPI can provide much relief in this respect. Because of its software-based nature, it has many complementary features that provide interesting strategic flexibility. For example, its deployment has a very low cost: it only requires the installation of a piece of software on a Windows computer. Being able to measure real users' connectivity is a big advantage, but at the same time the normal usage of those computers makes ProbeAPI instances very volatile: personal computers go online and offline for various reasons during normal usage.

By observing both graphs, we can note that there are still large networks with little coverage from ProbeAPI. ASNs 4134 (China Telecom), 4837 (China Unicom's CHINA169) and 9829 (BSNL, India) are good examples of large networks with a comparatively small number of probes.

Nevertheless, ProbeAPI's easy deployment makes it possible to be present in networks where few or no physical probes have been installed. In our measurements, the number of available probes in ProbeAPI at a given moment is around 84000, and during a normal day more than 290000 come online. Although not all probes are online all the time, the number of available probes at a given moment is almost 8 times RIPE Atlas' active probe count. This counterweighs the volatility of ProbeAPI's instances, but for longer measurements from a static set of probes, the stability of Atlas probes is an important factor to take into account.

It is important to remark that this comparison does not intend to establish the technical superiority of one system over the other. Quite the contrary: during this analysis we realized that Atlas and ProbeAPI contribute complementary features for measuring networks. For low-coverage networks that are physically or politically hard to reach, a software solution like ProbeAPI may be a viable way to first expand general measurement coverage. Once a region starts installing more Atlas probes, longer measurements with fixed sets of probes become possible thanks to Atlas' more stable probes.

At this stage, ProbeAPI's end-user perspective can provide a convenient view of last-mile conditions. Combining the stability and precision of Atlas' probes with the massive number of measurements possible from the end-user perspective, we can get a very detailed portrait of the network's condition.

Currently, around 74000 active probes from ProbeAPI and Atlas are monitoring the same ASes. ProbeAPI has around 14000 probes measuring networks where Atlas isn't present, while Atlas has around 1700 probes where ProbeAPI is absent. Combined, they give a grand total of around 95000 active probes able to measure networks serving almost 2.9 billion users.
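The combined figures come down to plain set arithmetic over the ASes each platform covers; a toy sketch with made-up AS sets (the real coverage lists contain thousands of entries):

    # Illustrative AS sets only; the real coverage lists are far larger.
    probeapi_asns = {"AS3320", "AS7922", "AS4134", "AS8151"}
    atlas_asns = {"AS3320", "AS7922", "AS701"}

    both = probeapi_asns & atlas_asns       # covered by both platforms
    only_probeapi = probeapi_asns - atlas_asns
    only_atlas = atlas_asns - probeapi_asns
    combined = probeapi_asns | atlas_asns   # what a joint measurement could reach

    print(len(both), len(only_probeapi), len(only_atlas), len(combined))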

Conclusion

The software-based design of ProbeAPI helps us achieve vast coverage, reaching the impressive number of 4000+ available probes for a single AS. Of course, the natural instability of the probes is an inherent constraint of ProbeAPI's architecture, but that is the trade-off for a very extensive and fast-growing measurement network.

On the other hand, RIPE Atlas is designed around physical devices installed in diverse locations by hosts. This physical design brings the inherent stability an independent device can provide. Probes can be placed strategically at points of the network other than the end user, where measurements can reveal valuable information about the network's conditions. All this requires recruiting hosts, so the distribution process is naturally slower than a software-based one.

There are essential architectural differences between ProbeAPI and RIPE Atlas. Both systems were designed with a similar set of measurement features in mind, but their differences in design end up opening different doors, which in turn give us the possibility of observing the network from a large number of diverse vantage points.


Testing Google Cloud Platform CDN Interconnect with CloudFlare on ProbeAPI


As many of you might have already heard, Google has introduced a new cooperation program with four CDN providers: CloudFlare, Fastly, Highwinds and Level 3. The Google Cloud Platform CDN Interconnect program gives CDN providers access to route their traffic through Google's private high-speed links, so they can serve their customers over reliable, low-latency routes thanks to Google's infrastructure.

After reading the news, the ProbeAPI team got curious to find out how much of a performance gain there is to expect from this service, if any at all. Taking advantage of the large number of probes available in ProbeAPI, we set up an experiment to put this interesting new infrastructure to the test.

We used Amazon S3 and Google Cloud Storage as cloud storage providers and CloudFlare as the CDN. To test similar routes, we chose the Singapore region for our Amazon S3 bucket and the Asia region for the Google Storage bucket. We chose a maximum of 100 probes in the USA as the destinations for our transfer tests.

After connecting and configuring the buckets to make them accessible through the CDN, we placed the test files in the buckets: several randomly generated 1.1 MB PDF files, since PDF is one of the file extensions cached by CDNs.

Our objective was to measure the transfer times of those files and find out how long it takes the CDN to cache them for each storage provider. In other words, we wanted to compare the transfer time of a cached file versus an uncached file from each bucket. The difference between those transfer times gives the time it takes the CDN to cache a file, based on the delay caused by caching and assuming that a previously cached file will transfer faster.

We tested two files per bucket; let's call them A and B. We made a pre-test by running an HTTP GET test from ProbeAPI requesting only file A from both buckets, which caused file A to be cached on CloudFlare's US servers. Then we ran the real test: an HTTP GET test using ProbeAPI with files A and B on both buckets. Because file A was cached by the pre-test, it is expected to transfer faster than file B, which still has to be transferred from Asia to the US CDN servers, just as file A was during the pre-test. Since this takes some extra time, we can calculate the overhead caused by caching.
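The core of the methodology boils down to timing two GET requests per bucket and subtracting; here is a simplified single-vantage-point sketch (the URLs are placeholders, and ProbeAPI of course ran this from many probes in parallel):

    import time
    import requests

    # Placeholder URLs: file A was pre-fetched once (so the CDN should have it cached),
    # file B is requested for the first time and must be pulled from the origin bucket.
    BUCKETS = {
        "google+cloudflare": ("https://cdn.example.invalid/gcs/a.pdf",
                              "https://cdn.example.invalid/gcs/b.pdf"),
        "s3+cloudflare": ("https://cdn.example.invalid/s3/a.pdf",
                          "https://cdn.example.invalid/s3/b.pdf"),
    }

    def fetch_seconds(url):
        # Wall-clock time to download the full object, in seconds.
        start = time.perf_counter()
        requests.get(url, timeout=30)
        return time.perf_counter() - start

    for origin, (cached_url, uncached_url) in BUCKETS.items():
        overhead = fetch_seconds(uncached_url) - fetch_seconds(cached_url)
        print(f"{origin}: caching overhead ~{overhead:.2f}s")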

Results

After just a few tests with ProbeAPI, the first thing that strikes you is the amazing speedup when transferring files preloaded in the CDN cache, especially for the Amazon servers. There was also a noticeable improvement in uncached Amazon performance after the fifth test: either sudden changes in network conditions or some load-balancing mechanism reduced the caching overhead enormously once that route had been used repeatedly by hundreds of probes across the US.

Getting to the point: we can observe Amazon's impressive speedup when the file is already cached, even surpassing uncached Google Storage performance after some tests, which is already fast by itself.

Google's buckets performed well altogether, and that is where we can clearly see the power of this infrastructure. The overhead introduced by the caching process when using Google + CloudFlare is minimal compared to the one introduced by Amazon + CloudFlare. This is due to the evident performance upgrade brought by the new partnership, with CloudFlare now able to use Google's infrastructure to transfer data from the datacenter to the CDN in the blink of an eye.

Caching Overhead Asia to US

We decided to run the tests once more, using the same methodology but transferring files from US servers to probes located in the US as well. This is a very likely real-world scenario, which makes this set of measurements very interesting.

CDN Comparison US to US

Here we can observe the expected scenario again: uncached files take longer to deliver than files already cached in the CDN. In this case the difference is, also as expected, less dramatic, due to the US-to-US traffic routing. The uncached files take similar amounts of time to load, although there is still a noticeable overhead improvement when measuring transfers from Google's buckets.

Even with your content being available locally in the US itself, the benefits of this CDN-Google partnership are still evident and relevant.

Analysis and Conclusion

We are living in exciting times, with the Internet becoming ever faster and adopting more sophisticated connectivity year after year. This is one example of how the Net is adopting optimized structures. A few years ago we would not have dreamed of having our content available practically locally everywhere in the world; CDNs made that possible.

Now, with this cooperation, not only is that possible, but your newest content also reaches its destination much faster. This is where the major beneficiaries of Google's Interconnect program lie: services whose content is constantly changing, being updated and adding new files, and who want it rapidly distributed for virtually seamless availability all over the world.

Even with your content travelling shorter distances, as our US-to-US test showed, the benefits of serving customers faster and more reliably are still very noticeable and could be critical in certain scenarios, e.g. during a flash crowd, when your content (or part of it) becomes highly popular overnight. That is exactly the situation in which you want to serve everybody without decreasing the quality of your service. The best part is that it works automatically: a scenario that haunted administrators in the past is becoming less and less fearsome thanks to CDNs, and now even freshly updated content reaches its destination with little overhead.

 

 
