Earlier this week, Google updated its Googlebot documentation to include information about the limit on the size of files the crawler scans.
It may seem strange, but after learning that the crawler scans only the first 15 MB of an HTML file or a supported text file, many webmasters panicked. Some decided that 15 MB of raw HTML per page was not enough and began writing to Google support, demanding comment on the restriction, SearchEngines reports.
Google explained that 15 MB is a huge amount for an HTML file. The limit does not cover resources fetched separately, such as videos and images; it applies only to the HTML source code. The threshold is quite high and, in fact, has been in place for many years; it has only now been added to the documentation.
“We added it to our documentation because it might be useful to some people when debugging, and because it rarely changes,” according to a Google Search Central blog post.
For most webmasters, this changes nothing, since very few pages on the Internet exceed that size, notes NIX Solutions. The average HTML file is about 500 times smaller: roughly 30 kilobytes (KB).
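For anyone who still wants to verify, a page's raw HTML size is easy to measure against the documented limit. Below is a minimal Python sketch, not an official Google tool: the function name `check_html_size`, the example URL, and the reading of 15 MB as 15 × 1024 × 1024 bytes are all illustrative assumptions.

```python
import urllib.request

# 15 MB, per Google's documentation; whether Google means MB or MiB
# is not specified, so this byte count is an assumption.
GOOGLEBOT_LIMIT_BYTES = 15 * 1024 * 1024

def check_html_size(url: str) -> None:
    # Fetch only the raw HTML: the limit applies to the source code
    # itself, not to images, videos, or other separately fetched resources.
    with urllib.request.urlopen(url) as response:
        html = response.read()
    size = len(html)
    print(f"{url}: {size:,} bytes of raw HTML")
    if size > GOOGLEBOT_LIMIT_BYTES:
        print("Exceeds 15 MB; Googlebot would stop reading at the limit.")
    else:
        pct = 100 * size / GOOGLEBOT_LIMIT_BYTES
        print(f"Well within the limit ({pct:.3f}% of 15 MB).")

check_html_size("https://example.com/")
```

For a typical 30 KB page, the script reports a size around 0.2% of the limit, which illustrates how far most real pages are from being truncated.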