




Parallel Programming in C with MPI and OpenMP
Michael J. Quinn
Chapter 3: Parallel Algorithm Design

Outline
- Task/channel model
- Algorithm design methodology
- Case studies

Task/Channel Model
- Parallel computation = set of tasks
- A task is a program with local memory and a collection of I/O ports
- Tasks interact by sending messages through channels

Foster's Design Methodology
- Partitioning
- Communication
- Agglomeration
- Mapping

Partitioning
- Dividing computation and data into pieces
- Domain decomposition: divide the data into pieces, then determine how to associate computations with the data
- Functional decomposition: divide the computation into pieces, then determine how to associate data with the computations

Example Domain Decompositions (figure)
Example Functional Decomposition (figure)

Partitioning Checklist
- At least 10x more primitive tasks than processors in the target computer
- Minimize redundant computations and redundant data storage
- Primitive tasks roughly the same size
- Number of tasks an increasing function of problem size

Communication
- Determine the values passed among tasks
- Local communication: a task needs values from a small number of other tasks; create channels illustrating the data flow
- Global communication: a significant number of tasks contribute data to perform a computation; don't create channels for them early in the design

Communication Checklist
- Communication operations balanced among tasks
- Each task communicates with only a small group of neighbors
- Tasks can perform communications concurrently
- Tasks can perform computations concurrently

Agglomeration
- Grouping tasks into larger tasks
- Goals: improve performance, maintain scalability of the program, simplify programming
- In MPI programming, the goal is often to create one agglomerated task per processor

Agglomeration Can Improve Performance
- Eliminates communication between primitive tasks agglomerated into a consolidated task
- Combines groups of sending and receiving tasks

Agglomeration Checklist
- Locality of the parallel algorithm has increased
- Replicated computations take less time than the communications they replace
- Data replication doesn't affect scalability
- Agglomerated tasks have similar computational and communication costs
- Number of tasks increases with problem size
- Number of tasks is suitable for likely target systems
- Tradeoff between agglomeration and code-modification costs is reasonable

Mapping
- The process of assigning tasks to processors
- Centralized multiprocessor: mapping done by the operating system
- Distributed memory system: mapping done by the user
- Conflicting goals of mapping: maximize processor utilization, minimize interprocessor communication

Mapping Example (figure)

Optimal Mapping
- Finding the optimal mapping is NP-hard
- Must rely on heuristics

Mapping Decision Tree / Mapping Strategy
- Static number of tasks
  - Structured communication
    - Constant computation time per task: agglomerate tasks to minimize communication; create one task per processor
    - Variable computation time per task: cyclically map tasks to processors
  - Unstructured communication: use a static load-balancing algorithm
- Dynamic number of tasks
  - Frequent communications between tasks: use a dynamic load-balancing algorithm
  - Many short-lived tasks: use a run-time task-scheduling algorithm

Mapping Checklist
- Considered designs based on one task per processor and multiple tasks per processor
- Evaluated static and dynamic task allocation
- If dynamic task allocation is chosen, the task allocator is not a bottleneck to performance
- If static task allocation is chosen, the ratio of tasks to processors is at least 10:1

Case Studies
- Boundary value problem
- Finding the maximum
- The n-body problem
- Adding data input

Boundary Value Problem
- A rod insulated along its length, with ice water at both ends; the rod cools as time progresses (figure)
- Modeled with a finite difference approximation
- Partitioning: one data item per grid point; associate one primitive task with each grid point; a two-dimensional domain decomposition
- Communication: identify the communication pattern between primitive tasks; each interior primitive task has three incoming and three outgoing channels
- Agglomeration and mapping (figure)

Execution-time analysis
- χ = time to update one element, n = number of elements, m = number of iterations
- Sequential execution time: mχ(n − 1)
- p = number of processors, λ = message latency
- Parallel execution time: m(χ⌈(n − 1)/p⌉ + 2λ)

Finding the Maximum Error (figure): 6.25%

Reduction
- Given an associative operator ⊕, compute a0 ⊕ a1 ⊕ a2 ⊕ … ⊕ a(n−1)
- Examples: add; multiply; and, or; maximum, minimum

Parallel Reduction Evolution (figures)

Binomial Trees
- Subgraph of a hypercube

Finding Global Sum (worked example, figures): starting from the 16 values 4, 2, 0, 7, −3, 5, −6, −3, 8, 1, 2, 3, −4, 4, 6, −1, pairwise sums produce 1, 7, −6, 4, 4, 5, 8, 2, then 8, −2, 9, 10, then 17, 8, and finally the global sum 25.

Binomial Tree Agglomeration (figures: each agglomerated task computes a local sum, then the partial sums are combined)

The n-body Problem
- Partitioning: domain partitioning; assume one task per particle; each task holds its particle's position and velocity vector
- Iteration: get the positions of all other particles; compute the new position and velocity
- Gather vs. all-gather
- Complete graph for all-gather (figure); hypercube for all-gather (figure)
- Communication time: hypercube vs. complete graph

Adding Data Input
- Scatter
- Scatter in log p steps (figure: the items 12345678 are halved each step — 5678 and 1234, then 56, 12, 78, 34 — until each of 8 tasks holds one item)

Summary: Task/Channel Model
- Parallel computation = a set of tasks; interactions through channels
- Good designs: maximize local …