java - What is the principle when Spark deals with data bigger than memory capacity?
As I know, Spark caches data in memory and computes on data in memory. But what if the data is bigger than memory? I read the source code, but I don't know which class schedules the job. Can someone explain the principle of how Spark deals with this?
om-nom-nom gave the answer in a comment; since comments can be overlooked, I thought I'd post it as an actual answer:
https://spark.apache.org/docs/latest/scala-programming-guide.html#rdd-persistence
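In short, per the linked RDD persistence docs: with the default `MEMORY_ONLY` storage level, partitions that don't fit in memory are simply not cached and are recomputed from their lineage each time they're needed; choosing a level like `MEMORY_AND_DISK` instead spills the partitions that don't fit to local disk. A minimal sketch of this (the master URL, app name, and input path are assumptions for illustration):

```scala
import org.apache.spark.SparkContext
import org.apache.spark.storage.StorageLevel

object PersistenceDemo {
  def main(args: Array[String]): Unit = {
    // Hypothetical local context and input path, just for illustration.
    val sc = new SparkContext("local[4]", "persistence-demo")
    val lines = sc.textFile("hdfs://namenode:9000/big/dataset.txt")

    // MEMORY_AND_DISK keeps partitions in memory where they fit and
    // spills the remainder to local disk, rather than dropping them
    // and recomputing (the default MEMORY_ONLY behavior).
    val lengths = lines.map(_.length).persist(StorageLevel.MEMORY_AND_DISK)

    // Both actions reuse the persisted partitions instead of
    // re-reading and re-mapping the whole input.
    println(lengths.count())
    println(lengths.sum())

    sc.stop()
  }
}
```

Which level to pick is a time/space trade-off: recomputing (`MEMORY_ONLY`) is often as fast as reading spilled partitions back from disk, so the docs suggest `MEMORY_AND_DISK` mainly when the lineage is expensive to recompute.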