A data-related limit you will run into when doing BIG things is that every property value for every version of a page is kept. There is some cleanup in place (I don't know if it was already there for v8), but if you disable it you might hit the int limit of that table's key (2,147,483,647, let's round it to 2 billion) before it overflows, which can cause some weird behaviour.
This means that, on Mark's example, if you have 20,000 nodes with 100 properties per node you can still have 1,000 versions per node (20,000 × 100 × 1,000 = 2 billion rows). To my knowledge this combination of things is most likely the first hard limit you will hit.
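A quick back-of-envelope check (not Umbraco code, just illustrative arithmetic; the helper name is made up):

```python
# Rough capacity estimate: rows in the property-data table grow roughly as
# nodes x properties-per-node x versions-per-node, and a signed 32-bit key
# tops out at 2,147,483,647.
INT_MAX = 2_147_483_647

def property_rows(nodes: int, props_per_node: int, versions_per_node: int) -> int:
    """Estimated rows needed to store every property value of every version."""
    return nodes * props_per_node * versions_per_node

rows = property_rows(20_000, 100, 1_000)
print(f"{rows:,} rows, under int limit: {rows <= INT_MAX}")
# 2,000,000,000 rows, under int limit: True -- but only just
```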
Besides that, it is like @Sebastiaan says: it depends on the size of your data, how it's structured, and how fast your hardware is. In general, big nested data (think Nested Content plus multilingual/multi-segment variants) will be harder to process than separate, linked data (nodes referencing other nodes via pickers). How you query for data also impacts how much processing power you need (see the docs link and the rough sketch below):
https://docs.umbraco.com/umbraco-cms/reference/common-pitfalls#querying-with-descendants-descendantsorself
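To make that last point concrete, here is a language-agnostic sketch (plain Python, not the Umbraco API) of the idea behind that pitfalls page: a Descendants()-style query walks the whole subtree every time it runs, while an index or cache pays that walk once and answers later lookups cheaply.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    doc_type: str
    children: list["Node"] = field(default_factory=list)

def descendants(node: Node):
    """Yield every node below `node`, depth-first."""
    for child in node.children:
        yield child
        yield from descendants(child)

def find_by_type_scan(root: Node, doc_type: str) -> list[Node]:
    # O(total nodes) on every call -- the Descendants() pitfall
    return [n for n in descendants(root) if n.doc_type == doc_type]

def build_type_index(root: Node) -> dict[str, list[Node]]:
    # Pay the full walk once, then each lookup is a cheap dictionary access
    index: dict[str, list[Node]] = {}
    for n in descendants(root):
        index.setdefault(n.doc_type, []).append(n)
    return index

# Example: a small tree with three "blogPost" nodes under the root
root = Node("home", [Node("blog", [Node("blogPost"), Node("blogPost")]), Node("blogPost")])
print(len(find_by_type_scan(root, "blogPost")))         # 3, but walked every node
print(len(build_type_index(root).get("blogPost", [])))  # 3, walk done once up front
```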