In Spark Standalone mode, there are master and worker nodes.
Here are a few questions:
- Do 2 worker instances mean one worker node running 2 worker processes?
- Does every worker instance hold an executor for a specific application (which manages storage and tasks), or does one worker node hold a single executor?
- Is there a flow chart explaining how Spark works at runtime, for example on a word count job?
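For concreteness, here is the kind of job I mean, sketched in plain Python (not PySpark, just hypothetical stand-ins for the stages) so the transformation steps — flatMap, map, reduceByKey — that Spark would schedule as tasks on the executors are explicit:

```python
from collections import defaultdict

# Toy input standing in for a distributed text file.
lines = ["to be or not to be"]

# flatMap: split each line into individual words.
words = [w for line in lines for w in line.split()]

# map: pair each word with a count of 1.
pairs = [(w, 1) for w in words]

# reduceByKey: sum the counts per word
# (in Spark this step involves a shuffle across executors).
counts = defaultdict(int)
for w, n in pairs:
    counts[w] += n

print(dict(counts))  # {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```

What I would like to understand is where each of these stages actually runs: on the driver, on the worker processes, or inside the executors.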