The premise of parallel computing is to divide large jobs into smaller tasks that are serviced in parallel by several servers. The premise is intuitive, but there is a range of models for parallel computation, and seemingly small differences in how queues relate to servers, and in arrival and service process assumptions, can lead to drastically different scaling properties.
The current dominance of the MapReduce architecture, and the many processing engines built on it, leads us to seek a fundamental understanding of the behavior and scaling of these systems. There is a range of abstract models for parallelized systems: the Split-Merge model, which has very unfavorable scaling properties; the Fork-Join model, which has been a popular subject of analysis; and the Non-Idling Single-Queue model, which is more representative of a MapReduce task manager and sees very good gains from load balancing. In fact, depending on how a MapReduce program is implemented, it can exhibit scaling behavior anywhere from Split-Merge to Non-Idling Single-Queue.

Ideally, having k parallel servers would make execution of any job k times faster, but both in theory and in practice this ideal is difficult to achieve. The key obstacle to ideal scaling is an uneven division of work among the tasks. Under most models a job is not complete until all of its tasks finish service, so one disproportionately large, or ``straggler'', task can dominate the job sojourn time. Similarly, a task stuck in a queue behind a straggler task from another job will dominate its own job's sojourn time. If all tasks had identical service times, a parallel system would behave like a conventional queue. In analytical models, in order to strike a balance between tractability and triviality, researchers usually assume that the service times of the tasks are independent random variables drawn from one of the common distributions.

We propose a research plan to investigate, both empirically and theoretically, how well a variety of models of parallel computing, and models for the division of jobs into tasks, apply to real MapReduce systems. We propose to use the results of our investigation to prototype new MapReduce task management and resource management schemes.
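As a toy illustration of how these three coordination models diverge on the same workload, the following sketch simulates a batch of jobs all present at time 0 (not a full arrival process, so these are completion times rather than true sojourn times) with i.i.d. exponential task service times. The function name and parameters are illustrative, not from any real system:

```python
import heapq
import random

def mean_completion_times(task_times):
    """Mean job completion time under three coordination models, for a
    batch of jobs all present at time 0. task_times[i][j] is the service
    time of task j of job i; k = len(task_times[0]) homogeneous servers."""
    n, k = len(task_times), len(task_times[0])

    # Split-Merge: the next job cannot start until every task of the
    # current job has finished, so servers idle while a straggler runs.
    t, split_merge = 0.0, []
    for job in task_times:
        t += max(job)
        split_merge.append(t)

    # Fork-Join: server j has its own queue holding task j of every job;
    # a job completes when its slowest per-server task finishes.
    server_done = [0.0] * k
    fork_join = []
    for job in task_times:
        for j in range(k):
            server_done[j] += job[j]
        fork_join.append(max(server_done))

    # Non-Idling Single-Queue: all tasks wait in one shared queue and any
    # free server takes the next task, so no server idles while work waits.
    free_at = [0.0] * k  # min-heap of server free times
    single_queue = []
    for job in task_times:
        finish = []
        for dur in job:
            start = heapq.heappop(free_at)
            heapq.heappush(free_at, start + dur)
            finish.append(start + dur)
        single_queue.append(max(finish))

    return tuple(sum(c) / n for c in (split_merge, fork_join, single_queue))

# 2000 jobs of 4 exponential tasks each: the same task times yield
# progressively worse mean completion times as coordination tightens,
# from Single-Queue through Fork-Join to Split-Merge.
rng = random.Random(1)
jobs = [[rng.expovariate(1.0) for _ in range(4)] for _ in range(2000)]
sm, fj, sq = mean_completion_times(jobs)
```

The ordering reflects the straggler effect directly: Split-Merge pays the full max of every job's task times, Fork-Join pays only per-server imbalance, and the single shared queue balances load across servers.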