Parallel Computers and Parallel Algorithms

Contents

Introduction
Chapter One: Parallelism
    1.1 The concept of parallelism
    1.2 The need for parallelism
    1.3 Benefits of multiple processors
    1.4 The study of parallel processing
    1.5 Applications of parallel processing
    1.6 Definition of a parallel computer
    1.7 Speedup
    1.8 Forms of parallel data processing
        1.8.1 Program level (Programs)
        1.8.2 Procedure level (Procedure)
        1.8.3 Instructions level (Instructions)
        1.8.4 Instruction level (Instruction)
    1.9 A brief history of computers
Chapter Two: Classification of Parallel Computers
    2.1 Flynn's Classification Scheme
        2.1.1 Single-instruction, single-data (SISD) computers
        2.1.2 Single-instruction, multiple-data (SIMD) computers
        2.1.3 Multiple-instruction, single-data (MISD) computers
        2.1.4 Multiple-instruction, multiple-data (MIMD) computers
            2.1.4-a Shared memory (MIMD Shared Memory)
            2.1.4-b Message passing (MIMD Message Passing)
    2.2 Interconnection Networks
        2.2.1 Static networks
            2.2.1.1 Linear and ring networks
            2.2.1.2 Mesh and torus networks
            2.2.1.3 Tree networks
            2.2.1.4 Cube networks
        2.2.2 Dynamic networks
            2.2.2.1 Bus network
            2.2.2.2 Crossbar switch
            2.2.2.3 Multistage networks
Chapter Three: Principles of Parallel Algorithm Design
    3.1 Basic concepts
    3.2 Processes and mapping
    3.3 Decomposition techniques
        3.3.1 Recursive decomposition
        3.3.2 Data decomposition
        3.3.3 Exploratory decomposition
        3.3.4 Speculative decomposition
        3.3.5 Hybrid decompositions
    3.4 Examples of parallel algorithms
        3.4.1 Bubble sort and its variants
            3.4.1.1 Odd-even transposition
        3.4.2 Prim's algorithm for the minimum spanning tree
            3.4.2.1 Basic definitions and concepts
            3.4.2.2 The minimum spanning tree (Prim's algorithm)
Chapter Four: Parallel Programming
    4.1 The OCCAM language
    4.2 FORTRAN-90
    4.3 The Message Passing Interface (MPI)
        4.3.1 General structure of MPI programs
        4.3.2 Communicators
        4.3.3 Querying the runtime environment
        4.3.4 Data exchange in MPI
        4.3.5 Example programs using MPI
            A program to send and receive data
            A program to send data around a ring
            A program to sum a sequence of numbers
            An odd-even transposition sort program
References

Introduction

Praise be to God, Lord of the Worlds, and peace and blessings upon the master of the messengers, our Prophet Muhammad, and upon his family and all his companions. To proceed:

Ever since the science of automatic computing came into existence, scientists have striven to make computers solve problems better and faster. Technology has yielded improvements in electronic circuits, so that many of them can now be placed on a single chip, and device clock rates have risen until processor speeds are measured in gigahertz. Nevertheless, there are physical limits on how far the performance of a single processor can be improved: heat and electromagnetic interference, for example, limit the density of transistors on a chip, and even if manufacturers solved these problems, a processor's speed could never exceed the speed of light. Beyond these physical limits there are economic ones: at some point the cost of producing an ever-faster processor grows so large that no one may be willing to bear it.
All of the reasons just mentioned will ultimately lead to abandoning the unproductive approaches and concentrating attention on a single one: distributing the load of the computational work among several processors — what is known as "parallelism".

As evidence of the importance of parallelism, which grows day by day, modern personal computers have recently begun to exploit it in practice: anyone can now own a personal computer with two processors working in parallel, and Intel has developed motherboards on which two modern processors can be installed.

Yet despite the importance of parallelism and the pressing need for it at present, one can hardly find Arabic references that treat the subject or even give an overview of it. This forced us to rely on foreign references, which caused us considerable difficulty in Arabizing the terminology involved.

The work consists of an introduction, four chapters, and a list of the references used. Chapter One covers a number of basic concepts of parallelism, studies the different forms of parallel data processing, and gives a brief history of computers in general and of the appearance of parallelism through those historical periods. Chapter Two covers Flynn's classification of parallel computers in detail, together with the interconnection networks used in parallel computers, their types, and their operating principles. Chapter Three is devoted to the study of parallel algorithms and the techniques used in writing them; many direct and well-known sequential examples are explained, together with how those techniques are used to turn them into parallel algorithms. Chapter Four treats parallel programming and its advantages: it discusses in particular parallel programming in the OCCAM language and in FORTRAN-90, studies in depth parallel programming in C++ using the MPI library and how to write a parallel program in this way, and presents several parallel application programs, written in C++ with MPI, for a number of problems well known in the sequential setting.

The goal of the project

The goal of this project (Parallel Computers and Parallel Algorithms) is to introduce the concept of parallelism and of parallel computers — their origin and operating principles — to examine the nature of the algorithms used to solve problems on a parallel computer as compared with a sequential one, and to write parallel programs for a number of well-known problems.

We hope, by God's grace, that we have succeeded in our endeavor.

The students:
Mohammed bin Abdullah Al-Jarallah
Nawwaf bin Muqbil Al-Harbi
Bassam bin Abdulrahman Al-Khurayyif
Omar bin Saleh Al-Bahdal
21 Dhu al-Qi'dah 1424 AH
Parallel_computers@yahoo.com
All rights reserved

Chapter One: Parallelism

For many years, parallel computers existed only in research laboratories. Today these computers are widely available commercially. The field of parallel processing has matured to the point where it is taught in the first years of university study. It is worth noting that parallelism covers a wide spectrum, from the design of the simplest hardware components, such as the adder, up to the analysis of theoretical models of parallel computation. Indeed, aspects of parallel processing can be integrated into any computer-science course — computer architecture, programming, networks, algorithms, and others. This chapter presents the most important concepts and principles of parallel processing.

1.1 The concept of parallelism

When computer experts speak of parallel processing, they do not mean processing the saying "two parallel lines never meet"; rather, they are discussing a number of computational activities that occur at one time.

Parallelism: a set of activities that occur at the same time.

The concept of parallelism in general is not exclusive to the science of computation; we may well practice parallelism in our daily lives. Taking notes while listening to a lecture, for example, constitutes parallel activities.

Parallel computation therefore seems a reasonable idea. Many of the concepts found in parallel computation have counterparts in social life, such as business management. The idea of many people working together to achieve a single goal resembles a number of processors working together toward a single goal.
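The definition above — several activities occurring at the same time — and the note-taking-during-a-lecture example can be sketched in code. This is a hypothetical illustration, not from the text; note that CPython threads are interleaved by the interpreter, so strictly speaking this demonstrates concurrent activities rather than guaranteed hardware parallelism (genuinely parallel execution would use, e.g., multiple processes):

```python
import threading

def take_notes(log):
    # One activity: writing notes.
    for i in range(3):
        log.append(f"note {i}")

def listen(log):
    # A second activity: listening to the lecture.
    for i in range(3):
        log.append(f"heard {i}")

def run_activities():
    """Start both activities at the same time and collect what happened."""
    log = []
    t1 = threading.Thread(target=take_notes, args=(log,))
    t2 = threading.Thread(target=listen, args=(log,))
    t1.start(); t2.start()   # both activities are now in progress
    t1.join(); t2.join()     # wait until both have finished
    return log

print(sorted(run_activities()))
```

Both activities always complete; only the interleaving of their individual steps varies from run to run.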
The idea that the work must be divided so that all the processors stay busy and none sits idle resembles the need to keep a team working without anyone waiting to get information from someone else.

From this analogy we see how parallel computation is a natural outcome of the divide-and-conquer principle. We begin with the problem we want to solve; then we take stock of the available resources that can be used to solve it (in computing, these resources are a number of processors); then we try to divide the problem into smaller parts that can be performed simultaneously by several members of the team.

We must be careful, however, because the phrase "at the same time" is imprecise. Fast computers such as the VAX, for example, may appear to be performing computations for many users at the same time because the processor moves information very quickly. This creates the "illusion" that the instructions are being executed simultaneously — what is called "illusory parallelism". Since the processor is executing the instructions of only one task at a time, it is not true parallelism.

We must also distinguish the concept of concurrency from the concept of parallelism presented above.

Concurrency: the ability to operate at the same time.

This makes concurrency and parallelism appear to be two synonyms for one meaning, but there is a subtle difference between them. We use parallelism for situations in which operations genuinely occur at the same time, such as four tasks executing on four processors (CPUs) simultaneously. Concurrency refers to both cases — the truly parallel one and the one with "illusory parallelism". An example of concurrency: four tasks that divide the time among themselves and execute on a single processor.

1.2 The need for parallelism

It is useful to answer the pressing and important question: why use parallelism?

The main reason for employing parallelism in design (of software or hardware) is to obtain higher performance, i.e. greater speed. All of today's supercomputers use parallelism extensively to increase performance; the fastest computer in the world in 2003 was the Japanese "Earth Simulator", which uses more than five thousand processors working in parallel.
Computer speeds have increased to the point where computing circuits have reached physical limits such as the speed of light; therefore, to improve performance further, we must use parallelism.

Speed is not the only reason for using parallelism. A computer designer can duplicate components to increase the machine's dependability (reliability). For example, the guidance system of a spacecraft consists of three computers that compare their results with one another; the craft can fly on a single computer while the other two stand by as backups.

Parallelism can also be used to decentralize control. A bank, for example, can use a network of small computers at its headquarters and branches instead of one large computer; this partitioned arrangement of the computers retains internal control through the bank's managers.

Parallelism is an important model for problem solving, for nature itself is parallel: while you talk with your friends, your heart pumps blood, your lungs breathe air, your eyes move, and your tongue moves — all of that happens in parallel. Many things are parallel by nature. Observe, for example, the actions of a crowd of people waiting for an elevator to go up (activities): people outside press the button on the floor where they stand (an event) at the same time as people inside the elevator press buttons. To make performance optimal (with a computer program, say), these parallel activities and events must be handled.

1.3 Benefits of multiple processors

Multiple processors offer several benefits, among them:
• Independent tasks execute on concurrent processors, increasing throughput and the number of users served.
• Integrating the processors into a single system reduces hardware cost, since they share system resources such as memory, disks, and network interface units.
• It provides high-speed communication among the multiple processors and achieves better coordination and faster links among related tasks.
• A single large job can be divided into several similar tasks executed at one time, speeding up the application.

1.4 The study of parallel processing

In recent years parallel processing has brought about a major scientific revolution in the field of computing, and it enters the world every day through data processing in the form of distributed databases. Scientific programmers need to understand the principles of parallel processing in order to program the computers of the future.

All supercomputers these days rely heavily on parallelism, which is used at the software level as well as in the engineering design of the hardware. The race among the world's nations has intensified: to compete economically, countries require scientific discoveries, and engineers and computer scientists who can put supercomputers to proper use.

We have used the phrase "parallel processing" many times in this context, and it is time to distinguish it from other phrases, especially "parallel programming" (see Chapter Four), since the field of computer science suffers from a terminology problem. There is no single agreed definition of parallel processing, so we will make clear how we use the term.

Parallel processing: the computer's processing of several programs at once (simultaneously), using several arithmetic-and-logic processing units.

Parallel processing is a subfield of computer science that draws concepts and ideas from theoretical computer science, computer engineering, programming languages, algorithms, and application areas such as artificial intelligence and graphics.

1.5 Applications of parallel processing

Applications in many fields need high computational power, and this need grows as industry, agriculture, control systems, and studies of all kinds rely ever more on computers. The importance of fast computation — and hence of parallel processing — is felt in many areas, among them:

Modeling and simulation: weather forecasting, oceanography, astrophysics.
Engineering: fluid mechanics, nuclear engineering, chemical engineering, robotics, artificial intelligence, image processing, and more.
Energy exploration: prospecting for oil and minerals, geological surveying, medical and military research.

A computer's computational power is measured by the number of operations on real numbers it can execute per second (Mega FLOPS — Floating-Point Operations Per Second). The applications above require computational power on the order of a thousand million operations per second. By 2003 the power of the leading supercomputers had reached 35 TFLOPS, roughly 35 trillion arithmetic operations every second. To improve performance, the processor's cycle time per operation must be shortened and the number of operations executed in parallel increased. But faster processors are not enough unless accompanied by the ability to move information from memory to the processing unit quickly enough; this capability is measured by the number of bytes that can be transferred from memory to the processing unit per second.

1.6 Definition of a parallel computer

If we said that a parallel computer is one that uses parallelism, we would include every computer in existence — all computers use parallelism at a low level, in their datapaths. Yet we say that some machines are parallel and others are not, so we need a test to tell them apart. We will adopt a test that is not precise but matches common usage: if a user can write a program that can determine whether the parallel architectural feature is present, we say the parallelism is visible.

A parallel computer (Parallel computer): a computer that uses several processors working simultaneously (that is, working at the same time) to solve a problem or to perform a particular function.

Software written for a parallel computer can increase the amount of work completed in a given period of time by dividing the computational tasks among several processors working simultaneously.

1.7 Speedup (Speedup)

Technology continually improves the performance of single processors, and people are adept at finding problems for which a single processor seems slow. With several processors used in parallel there is hope of solving such problems quickly. Ideally, we can speak of speedup according to the following definition:

Speedup (sp): the serial time (ts) divided by the parallel time (tp).

We therefore write:

sp = ts / tp

where sp is the speedup, ts the serial time, and tp the parallel time.

The concept of speedup can be illustrated by the following simplified example. If one construction worker needs one unit of time to build a wall, what time do n workers need for this job? Assume the ideal case, in which the workers do not get in each other's way; then the builders should finish the wall in 1/n units of time. We have observed that

speedup = serial time / parallel time,

where the serial time here is that of the single worker, and the parallel time is that of the n workers. The speedup in this case is therefore

sp = 1 / (1/n) = n.

By our earlier assumption, n builders are n times faster than a single worker; in the general case, with n-fold resources the potential speedup is n-fold.

Now suppose an assembly line (assembling the parts of some product, say) in which producing one piece requires performing four tasks, each task carried out by one worker.

Figure (1-1): An assembly line of four work stations.

Suppose further that each of the four tasks takes T units of time. In the first time unit, the first worker performs task 1 of piece 1; in the second time unit, the first worker performs task 1 of piece 2 while the second worker performs task 2 of piece 1, and so on. After four time units the first piece is complete and leaves the production line; after that, one piece is completed in every time unit. Suppose we want to produce ten pieces: what is the speedup of the production line relative to a single worker who performs all four tasks himself?

The time for one worker (TW1) is

TW1 = 10 * 4 * T = 40T

where T is the unit of time: the worker needs 4T to build each piece and hence 40T to build the ten pieces. On the assembly line, completing the first piece takes 4T, and each remaining piece takes a further 1T. The line time (TL) is therefore given by

TL = 4 * T + (10 − 1) * T = 13T.

Since

speedup of the line = time for one worker / time for the line,

the speedup of the assembly line is

TW1 / TL = 40T / 13T ≈ 3.08.

To produce ten pieces, the four workers on the assembly line are thus roughly three times faster than the single worker.

Suppose now that we want to produce k pieces. Then

TW1 = T * 4 * k and TL = 4 * T + (k − 1) * T.

The limiting behavior of the speedup when we produce many pieces — that is, as k approaches ∞ — can be followed through the following calculation. The speedup of the line (Sp) is

Sp = (k * 4 * T) / (4 * T + (k − 1) * T) = 4k / (3 + k).

Dividing numerator and denominator by k, we conclude that

Sp = 4 / (1 + 3/k),

and as k tends to infinity the line's speedup tends to 4. In an assembly line with four stations, then, the asymptotic speedup is four. In general, assuming equal time at every station, n stations in an assembly line yield a speedup of approximately n.

1.8 Forms of parallel data processing

Parallel processing is a form of data processing that allows a number of simultaneous events to be executed at the same time. These parallel events can occur at different levels:

1.8.1 Program level (Programs)

A number of mutually independent programs are executed at the same time, using the principles of multiprogramming, time sharing, and multiprocessing to achieve this.

Figure (1-2): Multiprogramming.

This level is exploited on large computers, where the parallel processing is transparent to the user: the operating system takes charge of managing and developing it.
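As a quick numerical check, the speedup formulas of section 1.7 can be computed directly. A minimal sketch (the constants are those of the worked assembly-line example above, with T taken as one time unit):

```python
def speedup(t_serial, t_parallel):
    # sp = ts / tp
    return t_serial / t_parallel

def assembly_line_speedup(n_stations, k_pieces):
    """Speedup of an n-station line over one worker, for k pieces."""
    t_one_worker = n_stations * k_pieces   # TW1 = n * k * T
    t_line = n_stations + (k_pieces - 1)   # TL  = n*T + (k - 1)*T
    return speedup(t_one_worker, t_line)

# Ten pieces on a four-station line: 40T / 13T, about 3.08.
print(round(assembly_line_speedup(4, 10), 2))

# As k grows, the speedup approaches the number of stations (here 4).
print(round(assembly_line_speedup(4, 1_000_000), 4))
```

The limiting value mirrors the derivation Sp = 4k / (3 + k) → 4 given in section 1.7.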
Relying on the multiprogramming principle, several programs of a single user, or several programs of different users, can be executed. One multiprogramming method divides the time of the central unit into equal time slices, with the central unit running different programs in rotation during the successive slices, as shown in Figure (1-2). In sequential execution, the first process (A) runs, then the second process (B), and finally the third process (C). In parallel execution, the three processes are divided into equal time slices and executed in parallel, the pieces running in the order A1, B1, C1, A2, B2, C2, A3, B3, C3.

1.8.2 Procedure level (Procedure)

This level requires dividing a single program into several tasks that run in parallel. Dividing programs this way is no easy matter: the tasks of a single program are interrelated, and executing one task often depends on the results of other tasks, so the dependence relations must be found and the unrelated (or partially related) tasks programmed to run in parallel.

The operating system can manage and handle this level of parallelism if automatic parallelization tools or intelligent compilers are used, but these tools are still research topics and have not yet proven their effectiveness; so the user usually analyzes the program at the algorithm level and converts it into a parallel program.

The following example illustrates this level of parallelism. A program performs two operations on vectors: the first multiplies the vector's elements by a constant (a), and the second adds a constant (b) to the vector's elements. It therefore performs the following procedures, as in Figure (1-3):
• Read the elements of vector V.
• Multiply the elements of vector V by the number a and store the results in vector X.
• Add b to the elements of vector V and store the results in vector Y.
• Print the results: vector X and vector Y.

Figure (1-3): Converting a sequential program into a parallel one (procedure level).

1.8.3 Instructions level (Instructions)

Several techniques exist that rely on executing several mutually independent instructions in parallel. The best known is array processing, in which a single instruction processes different data on several processors at the same time. This kind of parallelism is handled at the level of:
• the operating system, given the availability of automatic tools that assist in transforming programs;
• programming languages, since vector computers offer special programming languages for vector programming, such as vectorized FORTRAN.

This kind of parallelism is used especially in scientific computation programs, where vectors and matrices are processed. Figure (1-4) shows the conversion of a sequential program into a parallel one at the instructions level.

Figure (1-4): Converting a sequential program into a parallel one (instructions level).

1.8.4 Instruction level (Instruction)

Pipelining techniques rely on the principle of dividing a single instruction into successive sub-instructions so that these sub-instructions can be executed at the same time on different data. This kind of parallelism is exploited at the hardware level, and on current computers it is transparent to the user; most high-performance processors and specialized boards today use the pipelining principle in their design. The following example illustrates this level of parallelism.

The operation of adding two numbers represented in floating point can be divided into the following sub-operations:
1. Compare the exponents.
2. Align the two numbers to the same exponent.
3. Perform the addition of the two numbers.
4. Write the result in normalized form.

With this division, several such operations on a sequence of numbers can be carried out at the same time, by performing the different stages on different pairs of numbers. Figure (1-5) shows how the operation is carried out in parallel.

Figure (1-5): Pipelining (Pipeline).

Finally, it is worth noting that advanced parallel computers exploit all four levels of parallel operation.

1.9 A brief history of computers

A complete history of computers and computing would have to cover a large number of diverse machines, such as the abacus used by the ancient Chinese, the Analytical Engine invented by Babbage, and the Jacquard loom; it would also have to cover the analog and digital architectures of computers.

The development of digital computers is usually divided into generations. Each generation is marked by a significant advance over its predecessor in the technology used to build computers, in the internal organization of computer systems, and in programming languages. Although algorithms are not usually taken into account in defining computer generations, they too have developed steadily, including the algorithms used in the computational sciences. The history below uses these features as landmarks separating the generations.

The mechanical era (1623–1945)

The idea of using machines to solve mathematical problems can be traced back to the early seventeenth century A.D. The scientists who designed and built calculating machines able to perform addition, subtraction, multiplication, and division were Schickhard, Pascal, and Leibnitz. The first general-purpose calculating machine was Babbage's machine called the "Difference Engine", which he began in 1823 but never completed.
Another ambitious machine was the Analytical Engine, which Babbage began designing in 1842; it, too, was never completed.

The first generation of computers

The first generation runs from the first appearance of the computer in the mid-1940s until the IBM 650 was produced (1956).
• First-generation computers used vacuum tubes as their basic switching elements, and used them for memory as well.
• The processing power of those early computers is estimated at ten thousand instructions per second.
• As for storage, those computers could hold 2,000 alphabetic or numeric characters.
• This generation saw von Neumann's IAS machine, the first to employ parallel computation; von Neumann began designing it in 1946, and it did not become fully functional until 1952.
• This generation also saw the first commercial machine to use parallel computation, the IBM 701.
• The computers of this generation had serial architectures. Proposals were put forward for machines that were parallel to varying degrees, but they never went beyond the prototyping (modeling) stage.
• Programs at the start of this generation were written in machine language. With the beginning of the 1950s, assembly language came into use; translation from assembly language to machine language was at first done by hand, after which the assembler was built to convert programs from assembly language into machine language.
• Famous computers of the first generation include:
  - ENIAC, used at the time for calculations related to the hydrogen bomb.
  - EDVAC.
  - UNIVAC, used to predict the outcome of the 1952 United States presidential election.

The second generation

This generation extends from 1957 to 1963. It saw very important developments at every level, from the construction of the basic circuits to programming languages. Among its distinguishing features:
• Transistors were used in place of vacuum tubes.
• The processing power of second-generation computers is estimated at two hundred thousand instructions per second.
• The computers of this generation could store 32,000 alphabetic or numeric characters.
• Many high-level programming languages were produced in this generation, such as FORTRAN (1956), ALGOL (1958), and COBOL (1958).
• An important commercial machine of this generation was the IBM 704, which had a floating-point arithmetic unit that made it decidedly faster than earlier machines; it was the most successful machine of its time in using parallel computation.
• Another important machine of this generation was the IBM 7094, in which input/output (I/O) processors were used to improve communication between main memory and the input and output devices.
• This generation saw the first two supercomputers designed specifically for number processing in scientific applications: the LARC and the IBM 7030 (the latter also called Stretch). The IBM 7030 was the first computer to use parallelism in memory, enabling the slow magnetic memory to keep pace with the faster processors.

The third generation

This generation extends from 1964 to the mid-1970s. Its most important landmarks can be summarized as follows:
• Integrated circuits (ICs) were used in this generation (an integrated circuit is a semiconductor device combining a number of transistors in a single package).
• The first parallel computer appeared in this generation: the Illiac IV (1972).
• The processing power of third-generation computers is estimated at five million instructions per second.
• The computers of this generation could store two million alphabetic or numeric characters.
• Semiconductor memory was used in place of magnetic memory.
• An effective technique for designing complex processors appeared in this generation: microprogramming.
• Operating systems appeared.
• Computer designers began exploiting parallelism by using multiple functional units.
• The CDC 6600 was one of the important computers at the start of this generation (1964); it was the first computer to use functional parallelism, having ten functional units able to work at the same time.
• The first computer to use vector processors (Vector processor) appeared in this generation: the CDC 7600 (produced in 1969).
• Important computers of this generation include the CRAY 1, the IBM 360 and 370 series, and the CYBER 205.
• In 1972 the C programming language was produced, as was the UNIX operating system.

The fourth generation

This generation extends from the mid-1970s to the late 1980s. Its most important landmarks are:
• Large-scale integration (LSI) circuits were used in this generation, as was the microprocessor (Micro-processor).
• Important computers of this generation:
  - The CRAY X-MP, a four-processor system.
  - The CRAY 2, the first computer able to execute a billion arithmetic operations per second.
  - The CYBERplus, which used a system of multiple parallel processors (64 high-performance processors).

Future generations

This era begins in the early 1990s, with research moving in two directions in an attempt to imitate the human brain. The first direction tries to model the computer as neural networks — what are known as artificial neural networks (Artificial Neural Network). The other direction, in cooperation with genetic engineers, tries to produce a biological chip by adapting proteins to replace silicon in electronic circuits. Research in both directions serves as the basis of the coming generation of computers.

The research being carried out for the computers of the future can be summarized along a set of directions or axes. The first concerns hardware (Hardware); the second concerns methods of parallel operation and communication; the third concerns software (Software).

In hardware, the capabilities and speeds of the processors produced continue to increase, and in passive hardware such as memory, the amount of memory per chip continues to grow. In parallel operation and communication, it has proven possible to execute millions of instructions per second by using more than one processor, and cooperation among processors in executing instructions has turned out to be easier when the processors' instructions are simple. In software, some programs of the current generation, such as expert systems (Expert System), have reached maturity, but much work remains in the field of artificial intelligence to develop more natural ways of putting data and queries to the computer, such as conversing with it in natural language and writing in simple styles.

Chapter Two: Classification of Parallel Computers

Many parallel computers have been designed and built over the last two decades. We can classify them into groups according to their shared features, and this classification scheme lets us study one or more machines as a representative of each group, which helps us understand the group better. Unfortunately, researchers have found no convincing classification scheme that can cover every kind of parallel machine. Over the years there have been many attempts to find an effective and convenient way of classifying computers by architecture. Although no classification is complete, the most widespread one today is that proposed by Michael Flynn in 1966. Flynn's classification considers two factors: the number of instruction streams and the number of data streams flowing to the processor. We present this classification in the following section.

2.1 Flynn's Classification Scheme

Flynn's classification is based essentially on the number of data and instruction streams present in the machine. A stream here means a sequence of elements (instructions or data) as executed or operated on by a processor. Some machines, for example, execute a single stream of instructions, while in other machines several streams are executed; in the same way, some machines process a single stream of data while others process multiple streams. Flynn accordingly places a machine into one of four classes, depending on whether it has a single stream or multiple streams of each kind.

2.1.1 Single-instruction, single-data (SISD) computers

Under this class fall all the ordinary sequential computers, such as the Apple Macintosh and the DEC VAX. Computers of this class fetch instructions from memory and execute them, usually using data values referenced from memory; they then fetch further instructions from memory, and so on. This SISD design is also known as the von Neumann design, created by the scientist John von Neumann in the late 1940s and early 1950s.
ﻟﻘﺪ ﻣﺮﺕ ﺻﻨﺎﻋﺔ ﺍﳊﺎﺳﺒﺎﺕ ﻭﻋﻠﻰ ﻣﺪﻯ ﲬﺴﲔ ﻋﺎﻣﺎ ﺑﺎﻟﻜﺜﲑ ﻣﻦ ﺍﳋﱪﺓ ﰲ‬ ‫ﹰ‬ ‫ﻫﺬﺍ ﺍﻟﺘﺼﻨﻴﻒ ﻓﺎﻟﻜﺜﲑ ﻣﻦ ﻟﻐﺎﺕ ﺍﻟﱪﳎﺔ )ﻣﺜﻞ ‪ Fortran‬ﻭ ‪ ،(C‬ﻭﺍﳌﺘﺮﲨﺎﺕ، ﻭﻧﻈﻢ ﺍﻟﺘـﺸﻐﻴﻞ،‬ ‫ﻭﻃﺮﻕ ﺍﻟﱪﳎﺔ، ﻛﻠﻬﺎ ﺗﻌﺘﻤﺪ ﺃﺳﺎﺳﺎ ﻋﻠﻰ ﻫﺬﺍ ﺍﻟﺘﺼﻨﻴﻒ.‬ ‫ﹰ‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫03‬ ‫اﻟﺸﻜﻞ )2-2(: ‪SISD‬‬ ‫ﺇﻥ ﻛﻞ ﺃﻧﻮﺍﻉ ﺍﳊﺎﺳﺒﺎﺕ ﺍﻟﺘﺴﻠﺴﻠﻴﺔ ﻣﻮﺟﻮﺩﺓ ﰲ ﺗﺼﻨﻴﻒ ‪ ،SISD‬ﺑﻞ ﻭﺃﻛﺜﺮ ﻣـﻦ ﺫﻟـﻚ؛‬ ‫ﻓﺎﻟﺒﺎﺣﺜﻮﻥ ﻭﺿﻌﻮﺍ ﺑﻌﺾ ﺃﻧﻮﺍﻉ ﺍﳊﺎﺳﺒﺎﺕ ﺍﳌﺘﻮﺍﺯﻳﺔ ﺿﻤﻦ ﻫﺬﺍ ﺍﻟﺘﺼﻨﻴﻒ، ﻓﻌﻠﻰ ﺳﺒﻴﻞ ﺍﳌﺜـﺎﻝ،‬ ‫ﺍﳌﻌﺎﺎﺕ ﺍﻟﺸﻌﺎﻋﻴﺔ )‪ (Vector‬ﻣﺜﻞ 1-‪ Cray‬ﺗﻨﺘﻤﻲ ﺇﱃ ﻫﺬﺍ ﺍﻟﺘﺼﻨﻴﻒ ﺑﺎﻟﺮﻏﻢ ﻣﻦ ﺃﻥ ﺗﻌﻠﻴﻤـﺎﺕ‬ ‫ﺍﻟﺘﺸﻐﻴﻞ ﺗﻌﻤﻞ ﻋﻠﻰ ﻗﻴﻢ ﺑﻴﺎﻧﺎﺕ ﺷﻌﺎﻋﻴﺔ ﺇﻻ ﺃﻥ ﻟﺪﻳﻬﺎ ﺩﻓﻖ ﻭﺍﺣﺪ ﻣﻦ ﺍﻟﺘﻌﻠﻴﻤﺎﺕ.‬ ‫ٌَ‬ ‫2.1.2 اﻟﺤﺎﺳﺒﺎت وﺡﻴﺪة ﺕﺪﻓﻖ اﻟﺘﻌﻠﻴﻤﺎت وﻣﺘﻌﺪدة ﺕﺪﻓﻖ اﻟﻤﻌﻄﻴﺎت ‪SIMD‬‬ ‫ﻭﻳﺘﻀﻤﻦ ﻫﺬﺍ ﺍﻟﺘﺼﻨﻴﻒ ﺍﳊﺎﺳﺒﺎﺕ ﺍﻟﱵ ﲢﺘﻮﻱ ﻋﻠﻰ ﻭﺣﺪﺓ ﺗﻌﻠﻴﻤﺎﺕ ﻭﺍﺣﺪﺓ ﺗﺼﺪﺭ ﺃﻭﺍﻣـﺮ‬ ‫ﺇﱃ ﻋﺪﺓ ﻋﻨﺎﺻﺮ ﻣﻌﺎﳉﺔ )‪ .(PEs‬ﻭﻷﻥ ﻛﻞ ﻋﻨﺼﺮ ﻣﻌﺎﳉﺔ )‪ُ (PE‬ﺸﻐﻞ ﺑﻴﺎﻧﺎﺗﻪ ﺍﶈﻠﻴﺔ ﺍﳋﺎﺻـﺔ‬ ‫ﻳّ‬ ‫ﻓﻬﻨﺎﻙ ﺗﺪﻓﻘﺎﺕ ﻣﺘﻌﺪﺩﺓ ﻟﻠﺒﻴﺎﻧﺎﺕ. ﻭﻋﺎﺩﺓ ﻓﺈﻥ ﻭﺣﺪﺓ ﺍﻟﺘﻌﻠﻴﻤﺎﺕ ﺗﺼﺪﺭ ﻧﻔﺲ ﺍﻷﻣـﺮ ﺇﱃ ﲨﻴـﻊ‬ ‫ﻋﻨﺎﺻﺮ ﺍﳌﻌﺎﳉﺔ . ﻋﻠﻰ ﺳﺒﻴﻞ ﺍﳌﺜﺎﻝ ﲨﻴﻊ ﻋﻨﺎﺻﺮ ﺍﳌﻌﺎﳉﺔ ﺗﻨﻔﺬ ﺗﻌﻠﻴﻤﺔ ﺍﳉﻤﻊ ‪ ،ADD‬ﻭﺑﻌﺪ ﺫﻟﻚ‬ ‫ﺗﻨﻔﺬ ﺗﻌﻠﻴﻤﺔ ﺍﻟﺘﺨﺰﻳﻦ ‪ ،STORE‬ﻭﻫﻜﺬﺍ.‬ ‫ﻳﺘﻤﻴﺰ ﻫﺬﺍ ﺍﻟﻨﻮﻉ ﻣﻦ ﺍﳊﺎﺳﺒﺎﺕ ﺑﻮﺟﻮﺩ ﻭﺣﺪﺓ ﲢﻜﻢ ﻣﺮﻛﺰﻳﺔ . ﻭﺗـﺸﺮﻑ ﻋﻠـﻰ ﻋﻨﺎﺻـﺮ‬ ‫ﺍﳌﻌﺎﳉﺔ ﺍﳌﺨﺘﻠﻔﺔ ﺗﻌﻠﻴﻤﺔ ﻭﺍﺣﺪﺓ ﻣﻦ ﻭﺣﺪﺓ ﺍﻟﺘﺤﻜﻢ ﻭﺗﻘﻮﻡ ﺑﺘﻨﻔﻴﺬ ﻫﺬﻩ ﺍﻟﺘﻌﻠﻴﻤﺔ ﺑﺸﻜﻞ ﻣﺘـﺰﺍﻣﻦ‬ ‫ﻋﻠﻰ ﻣﻌﺎﻣﻼﺕ ﳐﺘﻠﻔﺔ. ﺗﻜﻮﻥ ﻫﺬﻩ ﺍﳊﺎﺳﺒﺎﺕ ﻣﺘﺰﺍﻣﻨﺔ، ﻭﻏﺎﻟﺒﺎ ﻣﺎ ﲤﻠﻚ ﺫﺍﻛﺮﺓ ﻣـﺸﺘﺮﻛﺔ ﺑـﲔ‬ ‫ﹰ‬ ‫ﺍﻟﻮﺣﺪﺍﺕ. ﻭﻟﺘﺴﻬﻴﻞ ﻋﻤﻠﻴﺔ ﺍﻟﻮﻟﻮﺝ ﺍﳌﺘﻮﺍﺯﻱ ﺇﱃ ﺍﻟﺬﺍﻛﺮﺓ ُﻳﻠﺠﺄ ﺇﱃ ﺗﻘﺴﻴﻤﻬﺎ ﺇﱃ ﺑﻨﻮﻙ ﳑﺎ ﻳﺴﻤﺢ‬ ‫ﹸ‬ ‫ﺑﺎﺳﺘﺨﻼﺹ ﻋﺪﺓ ﻣﻌﺎﻣﻼﺕ ﰲ ﻧﻔﺲ ﺍﻟﻮﻗﺖ، ﺗﺘﺒﺎﺩﻝ ﻭﺣﺪﺍﺕ ﻣﻌﺎﳉﺔ ﺍﳌﻌﻄﻴﺎﺕ ﻋـﻦ ﻃﺮﻳـﻖ‬ ‫ﺍﻟﺬﺍﻛﺮﺓ ﺍﳌﺸﺘﺮﻛﺔ ﻭﻳﺘﻢ ﺍﻻﺗﺼﺎﻝ ﺑﲔ ﻭﺣﺪﺍﺕ ﺍﳌﻌﺎﳉﺔ ﺍﳌﺨﺘﻠﻔﺔ ﻭﺑﻨﻮﻙ ﺍﻟﺬﺍﻛﺮﺓ ﻋـﱪ ﺷـﺒﻜﺔ‬ ‫ﺍﻟﺮﺑﻂ. ﻭﻧﻈﺮﹰﺍ ﻟﺘﻨﻔﻴﺬ ﻧﻔﺲ ﺍﻟﻌﻤﻠﻴﺔ ﻋﻠﻰ ﺍﻟﻮﺣﺪﺍﺕ ﺍﳌﺨﺘﻠﻔﺔ ﻓﻤﻦ ﺍﳌﻤﻜﻦ ﺍﻋﺘﺒـﺎﺭ ﺍﳊﺎﺳـﺒﺎﺕ‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫13‬ ‫‪ SIMD‬ﻛﺤﺎﺳﻮﺏ ﻭﺣﻴﺪ ﺍﳌﻌﺎﰿ ﺍﻟﺬﻱ ﻳﻘﻮﻡ ﺑﺘﻨﻔﻴﺬ ﺍﻟﺘﻌﻠﻴﻤﺎﺕ ﻋﻠﻰ ﺃﺟﺰﺍﺀ ﳐﺘﻠﻔﺔ ﻣﻦ ﺍﳌﻌﻄﻴﺎﺕ.‬ ‫ﻭﻳﻼﺋﻢ ﻫﺬﺍ ﺍﻟﻨﻮﻉ ﻣﻦ ﺍﳊﺎﺳﺒﺎﺕ ﺍﻟﻌﻤﻠﻴﺎﺕ ﻋﻠﻰ ﺍﻷﺷﻌﺔ ﻭﻋﻠﻰ ﺍﳌﺼﻔﻮﻓﺎﺕ ﻭﻏﺎﻟﺒﺎ ﻣﺎ ﻳـﺴﺘﺨﺪﻡ‬ ‫ﹰ‬ ‫ﻣﻦ ﺃﺟﻞ ﻋﻤﻠﻴﺎﺕ ﺍﳊﺴﺎﺏ ﺍﻟﻌﻠﻤﻲ. 
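ولتوضيح فكرة تنفيذ التعليمة الواحدة على معطيات متعددة بشكل عملي، فيما يلي محاكاة مبسطة بلغة Python (مثال افتراضي ليس من الكتاب): «تعليمة» واحدة تُطبَّق على جميع عناصر الشعاع دفعة واحدة، كما في ضرب عدد بجميع عناصر شعاع:

```python
# مثال افتراضي: محاكاة تعليمة SIMD واحدة تضرب العدد B
# بجميع عناصر الشعاع A وتضع الناتج في الشعاع C.

def simd_multiply(A, B):
    # الأمر نفسه (الضرب) يطبق على كل عناصر المعطيات
    return [B * a for a in A]

A = [1, 2, 3, 4]
B = 10
C = simd_multiply(A, B)
print(C)  # [10, 20, 30, 40]
```

في حاسب SIMD حقيقي تُنفَّذ عمليات الضرب هذه فعلياً بالتوازي على عناصر معالجة متعددة، أما هنا فهي محاكاة تسلسلية للفكرة فقط.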
ﻭﻳﻮﺿﺢ ﺍﻟﺸﻜﻞ)3-2( ﻫﺬﺍ ﺍﻟﺼﻨﻒ ﺣﻴﺚ ﺗﺆﺧﺬ ﺍﳌﻌﻄﻴﺎﺕ‬ ‫ﻣﻦ ﺍﻟﺬﺍﻛﺮﺓ ﻭﻳﻨﻔﺬ ﻋﻠﻴﻬﺎ ﺃﻣﺮ ﻭﺍﺣﺪ، ﻭﻣﺜﺎﻝ ﻋﻠﻰ ﺫﻟﻚ ﻋﻤﻠﻴﺔ ﺿﺮﺏ ﺍﻟﻌﺪﺩ ‪ B‬ﺑﺎﻟـﺸﻌﺎﻉ )‪A(I‬‬ ‫ﺣﻴﺚ ‪ ،I=0,…,N‬ﻭﺍﻟﻨﺎﺗﺞ ﻳﻮﺿﻊ ﰲ )‪ ،C(I‬ﻓﺘﺤﺼﻞ ﻋﻤﻠﻴﺔ ﺿﺮﺏ ‪ B‬ﲜﻤﻴﻊ ﻋﻨﺎﺻﺮ ﺍﻟـﺸﻌﺎﻉ‬ ‫)‪ A(I‬ﺑﻌﻤﻠﻴﺔ ﻭﺍﺪﺓ ﻋﻠﻰ ﻛﻞ ﺍﳌﻌﺎﻣﻼﺕ.‬ ‫اﻟﺸﻜﻞ )3-2(: ‪SIMD‬‬ ‫ﺗﺴﺘﺨﺪﻡ ﰲ ﻫﺬﻩ ﺍﳊﺎﺳﺒﺎﺕ ﺁﻻﻑ ﺍﳌﻌﺎﳉﺎﺕ ، ﻭﺗﻜﻮﻥ ﻋﺎﺩﺓ ﻣﺘﻮﺳﻄﺔ ﺍﻷﺩﺍﺀ ﺃﻭ ﺑـﺴﻴﻄﺔ‬ ‫ﺟﺪﹰﺍ ، ﻭﺟﻮﺩﻬﺗﺎ ﻭﺃﺩﺍﺋﻬﺎ ﺍﻟﻌﺎﱄ ﻧﺎﺗﺞ ﻋﻦ ﺍﻟﻌﺪﺩ ﺍﻟﻜﺒﲑ ﻟﻠﻤﻌﺎﳉﺎﺕ ﺍﳌﺴﺘﺨﺪﻣﺔ .‬ ‫ﲢﺘﻞ ﺍﳊﺎﺳﺒﺎﺕ ﻣﻦ ﺍﻟﻨﻮﻉ ‪ SIMD‬ﻣﻮﻗﻌﺎ ﺑﺎﺭﺯﹰﺍ ﰲ ﺗﺎﺭﻳﺦ ﺍﳊﺎﺳﺒﺎﺕ ﺍﳌﺘﻮﺍﺯﻳﺔ، ﻓﺄﻭﻝ ﺣﺎﺳﺐ‬ ‫ﹰ‬ ‫ﻣﺘﻮﺍﺯﻱ ﰎ ﺗﺸﻴﻴﺪﻩ ﻛﺎﻥ ﻣﻦ ﻫﺬﺍ ﺍﻟﻨﻮﻉ )ﻭﻫﻮ ‪ ،(ILLIAC IV‬ﻭﻟﻜﻦ ﺑﺴﺒﺐ ﺃﻥ ﻫﺬﺍ ﺍﻟﻨﻮﻉ ﻣﻦ‬ ‫ﺍﻵﻻﺕ ﻳﺒﲎ ﺑﺎﺳﺘﺨﺪﺍﻡ ﻣﻜﻮﻧﺎﺕ ﳐﺼﺼﺔ ﻟﺬﺍ ﻓﺈﻧﻪ ﻗﻞ ﺍﻹﻗﺒﺎﻝ ﻋﻠﻴﻬﺎ ﰲ ﺍﻟﺴﻨﻮﺍﺕ ﺍﻟﻘﻠﻴﻠﺔ ﺍﳌﺎﺿﻴﺔ.‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫23‬ ‫ﻳﻌﺘﱪ ﺍﳊﺎﺳﺐ )‪ (ILLIAC IV‬ﻣﺜﺎﻻ ﺟﻴﺪﹰﺍ ﻟﺘﻮﺿﻴﺢ ﺍﻵﻻﺕ ﺍﻟﱵ ﺗﻨﺘﻤﻲ ﺇﱃ ﻫﺬﺍ ﺍﻟﻨـﻮﻉ .‬ ‫ﹰ‬ ‫ﻓﻬﻨﺎﻟﻚ ﻭﺣﺪﺓ ﺗﻌﻠﻴﻤﺎﺕ ﻭﺍﺣﺪﺓ ﺗﺼﺪﺭ ﻧﻔﺲ ﺍﻷﻣﺮ ﺇﱃ ﲨﻴﻊ ﻋﻨﺎﺻﺮ ﺍﳌﻌﺎﳉﺔ ﺍﻷﺭﺑﻊ ﻭﺳـﺘﻮﻥ.‬ ‫ﻭﻛﻞ ﻋﻨﺼﺮ ﻣﻌﺎﳉﺔ ﻟﻪ ﺫﺍﻛﺮﺓ ﻣﻜﻮﻧﺔ ﻣﻦ ٢ ﻛﻴﻠﻮ ﻣﻦ ﺍﻟﻜﻠﻤﺎﺕ )‪ (2K words‬ﻭﺫﻟﻚ ﻟﺘﺤﻤﻴـﻞ‬ ‫ﻭﲣﺰﻳﻦ ﻭﻣﻌﺎﳉﺔ ﺍﳌﻌﻄﻴﺎﺕ. ﻭﺗﺮﺗﺒﻂ ﺍﻷﺭﺑﻊ ﻭﺳﺘﻮﻥ ﻋﻨﺼﺮ ﻣﻌﺎﳉﺔ ﻣﻌﺎ ﺑﺸﺒﻜﺔ ﺛﻨﺎﺋﻴﺔ ﺍﻷﺑﻌﺎﺩ ﻓﻴﻬﺎ‬ ‫ﹰ‬ ‫ﲦﺎﻧﻴﺔ ﻋﻨﺎﺻﺮ ﻣﻌﺎﳉﺔ ﰲ ﻛﻞ ﺟﺎﻧﺐ، ﻭﳝﻜﻦ ﻟﻠﻌﻨﺎﺻﺮ ﺍﳌﺘﺠﺎﻭﺭﺓ ﺇﺭﺳﺎﻝ ﻭﺍﺳـﺘﻘﺒﺎﻝ ﺍﻟﺮﺳـﺎﺋﻞ.‬ ‫ﻭﺗﻠﺘﻒ ﺍﻻﺭﺗﺒﺎﻃﺎﺕ ﻟﻠﻤﻌﺎﳉﺎﺕ ﰲ ﺍﻟﻄﺮﻑ ﺍﻟﻌﻠﻮﻱ ﻟﺘﺮﺗﺒﻂ ﻣﻊ ﺍﳌﻌﺎﳉﺎﺕ ﰲ ﺍﻟﻄﺮﻑ ﺍﻟـﺴﻔﻠﻲ،‬ ‫ﻭﻛﺬﻟﻚ ﺗﻠﺘﻒ ﺍﻻﺭﺗﺒﺎﻃﺎﺕ ﻟﻠﻤﻌﺎﳉﺎﺕ ﰲ ﺍﻟﻄﺮﻑ ﺍﻷﻳﺴﺮ ﻟﺘﺮﺗﺒﻂ ﻣﻊ ﺍﳌﻌﺎﳉـﺎﺕ ﰲ ﺍﻟﻄـﺮﻑ‬ ‫ﺍﻷﳝﻦ. )ﺍﻻﻟﺘﻔﺎﻓﺎﺕ ﻇﺎﻫﺮﺓ ﰲ ﺍﻟﺸﻜﻞ)5-2((.‬ ‫اﻟﺸﻜﻞ)4-2(: ﻳﻮﺿﺢ اﻟﺠﻬﺎز ‪ ILLIAC IV‬ﻣﻊ وﺡﺪة ﺕﻌﻠﻴﻤﺎت واﺡﺪة )‪ (IU‬و ٤٦ ﻋﻨﺼﺮ ﻣﻌﺎﻟﺠﺔ ‪PEs‬‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫33‬ ‫1‪PE‬‬ ‫9‪PE‬‬ ‫7‪PE‬‬ ‫…‬ ‫51‪PE‬‬ ‫0‪P E‬‬ ‫8‪PE‬‬ ‫8- ‪i‬‬ ‫…‬ ‫…‬ ‫1+ ‪i‬‬ ‫‪i‬‬ ‫1- ‪i‬‬ ‫8+ ‪i‬‬ ‫36‪PE‬‬ ‫…‬ ‫75‪PE‬‬ ‫65‪PE‬‬ ‫اﻟﺸﻜﻞ )5-2(: ﻳﻮﺿﺢ ﻃﺮﻳﻘﺔ اﻟﺘﻔﺎف اﻻرﺕﺒﺎﻃﺎت ﺏﻴﻦ اﻟﻤﻌﺎﻟﺠﺎت‬ ‫ﻭﺣﻴﺚ ﺃﻥ ﻛﻞ ﻋﻨﺼﺮ ﻣﻌﺎﳉﺔ )‪ (PE‬ﳝﻜﻨﻪ ﺃﻥ ﻳﺮﺳﻞ ﺍﻟﺮﺳﺎﺋﻞ ﰲ ﺃﺭﺑﻊ ﺍﲡﺎﻫﺎﺕ ﻓﺎﻟﺮﻭﺍﺑﻂ‬ ‫ﺗﻌﻨﻮﻥ ﺃﻭ ﺗﻌﺮﻑ ﻏﺎﻟﺒﺎ ﺑﺎﲡﺎﻫﺎﺕ ﺍﻟﺒﻮﺻﻠﺔ )ﴰﺎﻝ، ﺷﺮﻕ، ﺟﻨﻮﺏ،ﻏﺮﺏ(. 
ﻭﻳﺪﻋﻰ ﻫﺬﺍ ﺃﻳﻀﺎ‬ ‫ﹰ‬ ‫ﹰ‬ ‫ﺑﺸﺒﻜﺔ ﺍﺗﺼﺎﻝ ‪ .NEWS‬ﻓﻤﻊ ﺗﻌﻠﻴﻤﺔ ﻭﺍﺣﺪﺓ ﳝﻜﻦ ﻟﻌﻨﺎﺻﺮ ﺍﳌﻌﺎﳉﺔ ﺍﻷﺭﺑﻊ ﻭﺳﺘﻮﻥ ﺃﻥ ﲤﺮﺭ‬ ‫ﺍﻟﺮﺳﺎﻟﺔ ﰲ ﺍﲡﺎﻩ ﻭﺍﺣﺪ، ﻛﺎﻟﺸﻤﺎﻝ ﻣﺜﻼ.‬ ‫ﹰ‬ ‫اﻟﺸﻜﻞ )6-2(: یﻮﺿﺢ اﻟﻌﻨﻮﻧﺔ ﺑﺎﺳﺘﺨﺪام ﻃﺮیﻘﺔ ﺵﺒﻜﺔ‬ ‫‪NEWS‬‬ ‫ﱂ ﻳﺼﻤﻢ )‪ (ILLIAC IV‬ﻟﻴﻜﻮﻥ ﺣﺎﺳﺒﺎ ﻋﺎﻡ ﺍﻟﻐﺮﺽ، ﻭﺇﳕﺎ ﺻﻤﻢ ﻟﻐﺮﺽ ﺧﺎﺹ ﻭﻫﻮ ﺣﻞ‬ ‫ﹰ‬ ‫ﺍﳌﻌﺎﺩﻻﺕ ﺍﻟﺘﻔﺎﺿﻠﻴﺔ ﺍﳉﺰﺋﻴﺔ. ﻣﺜﻼ ﻳﺴﺘﺨﺪﻡ )‪ (ILLIAC IV‬ﻟﻠﺘﻨﺒﺆ ﺑﺎﻟﻄﻘﺲ، ﻭﺍﻟﺘﻨﺒﺆ ﺑـﺎﻟﻄﻘﺲ‬ ‫ﹰ‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫43‬ ‫ﻳﺴﺘﻠﺰﻡ ﺑﻴﺎﻧﺎﺕ ﻛﺜﲑﺓ ﰲ ﻓﻀﺎﺀ ﺛﻼﺛﻲ ﺍﻷﺑﻌﺎﺩ. ﻭﰲ ﻫﺬﻩ ﺍﳊﺎﻟﺔ ﻓﺎﳊﻞ ﺍﻟﻨﻤﻮﺫﺟﻲ ﻫـﻮ ﺑﺘﻘـﺴﻴﻢ‬ ‫ﺍﻟﻔﻀﺎﺀ ﺇﱃ ٤٦ ﻗﺴﻤﺎ، ﻭﻭﺿﻊ ﻗﺴﻢ ﻭﺍﺣﺪ ﰲ ﻛﻞ ﻣﻌﺎﰿ )‪ .(PE‬ﻓﺎﳊﻠﻮﻝ ﻟﻸﺭﺑﻊ ﻭﺳﺘﲔ ﻗﺴﻤﺎ‬ ‫ﹰ‬ ‫ﹰ‬ ‫ﰲ ﻫﺬﻩ ﺍﻟﻄﺮﻳﻘﺔ ﲢﺴﺐ ﺑﺎﻟﺘﻮﺍﺯﻱ. ﻭﻋﻨﺪﻣﺎ ﳛﺘﺎﺝ ﻋﻨﺼﺮ ﺍﳌﻌﺎﳉﺔ ﺇﱃ ﺑﻴﺎﻧﺎﺕ ﻣﻦ ﺍﻟﻘﺴﻢ ﺍﺠﻤﻟﺎﻭﺭ‬ ‫ﻓﺎﻻﺗﺼﺎﻝ ﳚﺐ ﺃﻥ ﻳﺒﺪﺃ ﻋﻠﻰ ﺷﺒﻜﺔ ﺍﻟﺮﺑﻂ.‬ ‫ﻣﻦ ﺍﻵﻻﺕ ﺍﳍﺎﻣﺔ ﺍﻟﱵ ﺗﺘﺒـﻊ ﻟﺘـﺼﻨﻴﻒ ‪ SIMD‬ﻫـﻲ:‬ ‫‪ Goodyear MPP‬ﻭ 2-‪.MasPar MP‬‬ ‫‪ILLIAC IV‬‬ ‫ﻭ‬ ‫‪ICL DAP‬‬ ‫ﻭ‬ ‫3.1.2 اﻟﺤﺎﺳﺒﺎت ﻣﺘﻌﺪدة ﺕﺪﻓﻖ اﻟﺘﻌﻠﻴﻤﺎت ووﺡﻴﺪة ﺕﺪﻓﻖ اﻟﻤﻌﻄﻴﺎت ‪MISD‬‬ ‫ﺣﻴﺚ ﻳﺘﻢ ﰲ ﻫﺬﺍ ﺍﻟﻨﻮﻉ ﺗﻨﻔﻴﺬ ﻋﺪﺓ ﺗﻌﻠﻴﻤﺎﺕ ﳐﺘﻠﻔﺔ ﻋﻠﻰ ﻣﻌﺎﻣﻞ ﻭﺍﺣﺪ ﺧﻼﻝ ﺍﻟﺪﻭﺭﺓ ﺍﻟﺰﻣﻨﻴﺔ‬ ‫ﻟﻠﺤﺎﺳﺐ. ﻳﺘﻤﻴﺰ ﻫﺬﺍ ﺍﻟﻨﻮﻉ ﻣﻦ ﺍﳊﺎﺳﺒﺎﺕ ﺑﻮﺟﻮﺩ ﻋﺪﺩ ﻣﻦ ﺍﳌﻌﺎﳉﺎﺕ ﺍﻟﱵ ﺗﻌﻤﻞ ﺑﺸﻜﻞ ﻣﺴﺘﻘﻞ‬ ‫ﻋﻦ ﺑﻌﻀﻬﺎ ﺍﻟﺒﻌﺾ. ﻳﺘﻀﻤﻦ ﻛﻞ ﻣﻌﺎﰿ ﻭﺣﺪﺓ ﲢﻜﻢ ﺧﺎﺻﺔ ﺑﻪ ﺗﺴﺎﻋﺪﻩ ﻋﻠﻰ ﺗﻨﻔﻴـﺬ ﺍﳌﻬـﺎﻡ‬ ‫ﺍﳉﺰﺋﻴﺔ ﺍﳌﻮﻛﻠﺔ ﺇﻟﻴﻪ. ﻳﻮﺟﺪ ﺍﻟﻘﻠﻴﻞ ﻣﻦ ﺍﳊﺎﺳﺒﺎﺕ ﺍﳌﺘﻮﺍﺯﻳﺔ ﻣﻦ ﻧﻮﻉ ‪ MISD‬ﻭﺗﻜﺎﺩ ﺗﻜﻮﻥ ﺃﳘﻴـﺔ‬ ‫ﻫﺬﺍ ﺍﻟﻨﻮﻉ ﻣﻘﺘﺼﺮﺓ ﻋﻠﻰ ﺗﺼﻨﻴﻒ)‪ (Flynn‬ﻷﻧﻪ ﻳﺘﻼﺀﻡ ﻣﻊ ﻣﺒﺪﺃ ﺍﻟﺘﺼﻨﻴﻒ. ﻭﺃﻥ ‪ MISD‬ﻫـﻲ‬ ‫ﻗﻠﻴﻠﺔ ﺍﻻﺳﺘﻌﻤﺎﻝ ﻓﻬﻲ ﺗﻌﺘﻤﺪ ﻣﺒﺪﺃ ﺍﻟﻌﻤﻞ ﺍﻟﺘﺴﻠﺴﻠﻲ.‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫53‬ ‫اﻟﺸﻜﻞ )7-2(: ‪MISD‬‬ ‫4.1.2 اﻟﺤﺎﺳﺒﺎت ﻣﺘﻌﺪدة ﺕﺪﻓﻖ اﻟﺘﻌﻠﻴﻤﺎت وﻣﺘﻌﺪدة ﺕﺪﻓﻖ اﻟﻤﻌﻄﻴﺎت ‪MIMD‬‬ ‫ﻳﺘﻢ ﰲ ﻫﺬﺍ ﺍﻟﻨﻮﻉ ﺗﻨﻔﻴﺬ ﻋﺪﺓ ﺗﻌﻠﻴﻤﺎﺕ ﻋﻠﻰ ﻣﻌﺎﻣﻼﺕ ﳐﺘﻠﻔﺔ ﺧـﻼﻝ ﺍﻟـﺪﻭﺭﺓ ﺍﻟﺰﻣﻨﻴـﺔ‬ ‫ﻟﻠﺤﺎﺳﺐ . ﻳﺘﻤﻴﺰ ﻫﺬﺍ ﺍﻟﻨﻮﻉ ﻣﻦ ﺍﳊﺎﺳﺒﺎﺕ ﺑﻮﺟﻮﺩ ﻋﺪﺩ ﻣﻦ ﺍﳌﻌﺎﳉﺎﺕ ﺍﻟﱵ ﺗﻌﻤـﻞ ﺑـﺸﻜﻞ‬ ‫ﻣﺴﺘﻘﻞ ﻋﻦ ﺑﻌﻀﻬﺎ ﺍﻟﺒﻌﺾ. ﻳﺘﻀﻤﻦ ﻛﻞ ﻣﻌﺎﰿ ﻭﺣﺪﺓ ﲢﻜﻢ ﺧﺎﺻﺔ ﺑﻪ ﺗﺴﺎﻋﺪﻩ ﻋﻠـﻰ ﺗﻨﻔﻴـﺬ‬ ‫ﺍﳌﻬﺎﻡ ﺍﳉﺰﺋﻴﺔ ﺍﳌﻮﻛﻠﺔ ﺇﻟﻴﻪ. ﻫﺬﻩ ﺍﳊﺎﺳﺒﺎﺕ ﻏﲑ ﻣﺘﺰﺍﻣﻨﺔ ﻭﺑﺎﻟﺘﺎﱄ ﻓﺎﳊﻮﺍﺩﺙ ﺍﻟﱵ ﲡﺮﻱ ﻋﻠﻰ ﻣﻌﺎﰿ‬ ‫ﻣﺎ ﻻ ﺗﺮﺗﺒﻂ ﺑﺎﳊﻮﺍﺩﺙ ﺍﻟﱵ ﲡﺮﻱ ﻋﻠﻰ ﺍﳌﻌﺎﳉﺎﺕ ﺍﻷﺧﺮﻯ . 
ﳝﻜﻦ ﻓﺮﺽ ﻧﻮﻉ ﻣﻦ ﺍﻟﺘﺰﺍﻣﻦ ﺑﲔ‬ ‫ﻫﺬﻩ ﺍﳌﻌﺎﳉﺎﺕ ﻓﻴﻤﺎ ﺩﻋﺖ ﺍﻟﻀﺮﻭﺭﺍﺕ ﺍﻟﱪﳎﻴﺔ ﻟﺬﻟﻚ ﻭﻳﺘﻢ ﻫﺬﺍ ﺍﻟﺘـﺰﺍﻣﻦ ﺑﺎﺳـﺘﺨﺪﺍﻡ ﺑﻌـﺾ‬ ‫ﺍﻟﺘﻌﻠﻴﻤﺎﺕ ﺍﻷﻭﻟﻴﺔ ﺍﳌﺨﺼﺼﺔ ﻟﻠﺘﺰﺍﻣﻦ ﺃﻭ ﻋﻦ ﻃﺮﻳﻖ ﺍﻟﻌﺘﺎﺩ ﻭﳝﻜﻦ ﺃﻥ ﻳﺘﻢ ﺃﻳـﻀﺎ ﻋـﻦ ﻃﺮﻳـﻖ‬ ‫ﹰ‬ ‫ﺍﻟﱪﳎﻴﺎﺕ ﻭﻧﻈﻢ ﺍﻟﺘﺸﻐﻴﻞ . ﻳﻮﺿﺢ ﺍﻟﺸﻜﻞ)8-2( ﺃﻥ ﻛﻞ ﻣﻌﺎﻣﻞ ﻳﻌﺎﰿ ﻣﻦ ﻗﺒﻞ ﻣﻌﺎﰿ ﻣﺎ ﺑﺄﻣﺮ ﻣﺎ‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫63‬ ‫، ﻭﰲ ﺍﻟﻮﻗﺖ ﺫﺍﺗﻪ ﺗﻌﺎﰿ ﲨﻴﻊ ﺍﳌﻌﺎﻣﻼﺕ ﺣﺴﺐ ﻛﺎﻓﺔ ﺍﻷﻭﺍﻣﺮ ﻭﻧﺘﻴﺠﺔ ﻫﺬﻩ ﺍﳌﻌﺎﳉﺔ ﲣـﺰﻥ ﰲ‬ ‫ﺍﻟﺬﺍﻛﺮﺓ. ﻧﻼﺣﻆ ﺃﻧﻪ ﻋﻨﺪ ﺍﳌﻌﺎﳉﺔ ﻻ ﻳﻨﺘﻈﺮ ﺃﻱ ﻣﻌﺎﰿ ﻧﺘﻴﺠﺔ ﻣﻦ ﻣﻌﺎﰿ ﺁﺧﺮ ﻷﻧﻪ ﺗﻌﻤـﻞ ﲨﻴـﻊ‬ ‫ﺍﳌﻌﺎﳉﺎﺕ ﺑﺸﻜﻞ ﻏﲑ ﻣﺘﺰﺍﻣﻦ ﻭﻋﻠﻰ ﺍﻟﺘﻮﺍﺯﻱ .‬ ‫ﺗﺴﺘﺨﺪﻡ ﺍﳌﻌﺎﳉﺔ ﺍﳌﺘﻮﺍﺯﻳﺔ ﰲ ﺣﺎﺳﺒﺎﺕ ‪ MIMD‬ﻋﻠﻰ ﻣﺴﺘﻮﻯ ﺍﳋﻮﺍﺭﺯﻣﻴﺎﺕ ﺣﻴﺚ ﻳﻘﺴﻢ‬ ‫ﺍﻟﱪﻧﺎﻣﺞ ﺇﱃ ﻣﻬﻤﺎﺕ ﺟﺰﺋﻴﺔ ﻣﺴﺘﻘﻠﺔ ﺟﺰﺋﻴﺎ ﻓﻴﻤﺎ ﺑﻴﻨﻬﺎ ، ﻭﺗﻨﻔﺬ ﻛﻞ ﻣﻬﻤﺔ ﻋﻠـﻰ ﻣﻌـﺎﰿ ﻣـﻦ‬ ‫ﹰ‬ ‫ﻣﻌﺎﳉﺎﺕ ﺍﳊﺎﺳﺐ . ﺗﺘﻜﻮﻥ ﻫﺬﻩ ﺍﳊﺎﺳﺒﺎﺕ ﻣﻦ ﻋﺸﺮﺍﺕ ﺍﳌﻌﺎﳉﺎﺕ ﻭﻫﻲ ﺍﳊﺎﺳـﺒﺎﺕ ﺍﻷﻛﺜـﺮ‬ ‫ﻋﻤﻮﻣﻴﺔ ﰲ ﻭﻗﺘﻨﺎ ﺍﳊﺎﺿﺮ ﺣﻴﺚ ﳝﻜﻦ ﺍﺳﺘﺜﻤﺎﺭﻫﺎ ﻣﻦ ﺃﺟﻞ ﺗﻄﺒﻴﻘﺎﺕ ﳐﺘﻠﻔﺔ ﻭﻣﺘﻨﻮﻋﺔ .‬ ‫ﻭﻳﻮﺟﺪ ﳍﺬﺍ ﺍﻟﺘﺼﻨﻴﻒ ﻓﺌﺘﲔ ﻓﺮﻋﻴﺘﲔ ﻫﺎﻣﺘﲔ ﻭﳘﺎ:‬ ‫)‪ (a‬اﻟﺬاآﺮة اﻟﻤﺸﺘﺮآﺔ )‪.(Shared memory‬‬ ‫)‪ (b‬ﺕﻤﺮیﺮ اﻟﺮﺳﺎﺋﻞ )‪.(message passing‬‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫73‬ ‫اﻟﺸﻜﻞ )8-2(: ‪MIMD‬‬ ‫‪ MIMD Shared Memory‬اﻟﺬاآﺮة اﻟﻤﺸﺘﺮآﺔ ‪2.1.4-a‬‬ ‫ﰲ ﻫﺬﻩ ﺍﻟﻔﺌﺔ ﺍﻟﻔﺮﻋﻴﺔ ﻓﺈﻥ ﺃﻱ ﻣﻌﺎﰿ ﻳﺘﻀﻤﻦ ﻭﺣﺪﺓ ﺗﻌﻠﻴﻤﺎﺕ ﻭ ﻭﺣﺪﺓ ﺣﺴﺎﺑﻴﺔ ﲤﻜﻨﻪ ﻣـﻦ ﺃﻥ‬ ‫ﻳﻘﺮﺃ ﻣﻦ ﺃﻭ ﻳﻜﺘﺐ ﰲ ﺫﺍﻛﺮﺓ ﻣﺸﺘﺮﻛﺔ.‬ ‫اﻟﺸﻜﻞ )9-2(: ﻥﻤﻮذج اﻟﺬاآﺮة اﻟﻤﺸﺘﺮآﺔ )‪(MIMD Shared Memory‬‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫83‬ ‫ﰲ ﻫﺬﺍ ﺍﻟﻨﻤﻮﺫﺝ ﺗﺮﺗﺒﻂ ﺍﳌﻌﺎﳉﺎﺕ ﻣﻊ ﻭﺣﺪﺍﺕ ﺍﻟﺬﺍﻛﺮﺓ ﺑﻮﺍﺳﻄﺔ ﺷﺒﻜﺔ ﺍﻟﺮﺑﻂ، ﻭﺍﻟﱵ ﳝﻜﻦ‬ ‫ﺃﻥ ﺗﺘﺨﺬ ﻋﺪﺓ ﺃﺷﻜﺎﻝ ﺗﺒﻌﺎ ﻟﻨﻮﻉ ﺍﻵﻟﺔ. ﺷﺒﻜﺔ ﺍﻟﺮﺑﻂ ﳝﻜﻦ ﺃﻥ ﺗﺜﺒﺖ ﻋﻠﻰ ﺷﻜﻞ ﺣﻠﻘﺔ ﺃﻭ ﺷﺒﻜﺔ‬ ‫ﹰ‬ ‫)‪.(Mesh‬‬ ‫ﳝﻜﻦ ﻟﻨﺎ ﺗﺸﺒﻴﻪ ﻫﺬﺍ ﺍﻟﻨﻤﻮﺫﺝ ﺑﻠﺠﻨﺔ ﺗﺴﺘﺨﺪﻡ ﺳﺒﻮﺭﺓ ﺭﺋﻴﺴﻴﺔ ﻣﺸﺘﺮﻛﺔ ﻓﻴﻤﺎ ﺑﻴﻨـﻬﺎ ﻟﻜـﻞ‬ ‫ﺍﻻﺗﺼﺎﻻﺕ، ﻓﺄﻱ ﻋﻀﻮ ﰲ ﺍﻟﻠﺠﻨﺔ ﳝﻜﻦ ﻟﻪ ﺃﻥ ﻳﻘﺮﺃ ﺃﻱ ﺟﺰﺀ ﻣﻦ ﺍﻟﺴﺒﻮﺭﺓ، ﻭﻟﻜﻦ ﳝﻜﻦ ﻟﺸﺨﺺ‬ ‫ﻭﺍﺣﺪ ﻓﻘﻂ ﺃﻥ ﻳﻜﺘﺐ ﻠﻰ ﺟﺰﺀ ﻣﻌﲔ ﻣﻦ ﺍﻟﺴﺒﻮﺭﺓ. ﻓﻔﻲ ﳕﻮﺫﺝ ﺍﻟﺬﺍﻛﺮﺓ ﺍﳌﺸﺘﺮﻛﺔ ﻫﺬﺍ ﺭﲟـﺎ‬ ‫ﳛﺼﻞ ﺗﻌﺎﺭﺽ ﺃﻭ ﺗﻀﺎﺭﺏ ﰲ ﺍﻟﺬﺍﻛﺮﺓ ﻋﻨﺪﻣﺎ ﳛﺎﻭﻝ ﻣﻌﺎﳉﺎﻥ ﺍﻟﻜﺘﺎﺑﺔ ﰲ ﻧﻔﺲ ﺍﳉﺰﺀ ﻣﻦ ﺍﻟﺬﺍﻛﺮﺓ‬ ‫ﰲ ﻧﻔﺲ ﺍﻟﻮﻗﺖ، ﻭﺃﻳﻀﺎ ﺭﲟﺎ ﺗﺘﺪﺍﺧﻞ ﺍﳌﻌﺎﳉﺎﺕ ﻣﻊ ﺑﻌﻀﻬﺎ ﻋﻨﺪ ﺍﻟﻜﺘﺎﺑﺔ ﰲ ﻧﻔﺲ ﺧﻠﻴﺔ ﺍﻟﺬﺍﻛﺮﺓ‬ ‫ﹰ‬ ‫ﺍﳌﺸﺘﺮﻛﺔ ﳑﺎ ﻳﺘﺴﺒﺐ ﰲ ﺇﻳﻘﺎﻑ ﻭﻓﺸﻞ ﺍﻟﻌﻤﻠﻴﺔ ﺍﳊﺴﺎﺑﻴﺔ. 
ﻭﻟﻜﻲ ﻻ ﻳﺼﺒﺢ ﺍﻟﺘﺪﺍﺧﻞ ﻣﺸﻜﻠﺔ ﳚﺐ‬ ‫ﺃﻥ ﺗﺰﻭﺩ ﺍﻵﻟﺔ ﺑﺄﻗﻔﺎﻝ ﺃﻭ ﺃﻱ ﺁﻟﻴﺔ ﻟﻠﺘﺰﺍﻣﻦ ﻭﺫﻟﻚ ﻟﻀﻤﺎﻥ ﻭﺟﻮﺩ ﻣﻌﺎﰿ ﻭﺍﺣﺪ ﻓﻘﻂ ﻳﺘﻌﺎﻣﻞ ﻣـﻊ‬ ‫ﺧﻠﻴﺔ ﺍﻟﺬﺍﻛﺮﺓ ﺍﳌﺸﺘﺮﻛﺔ ﰲ ﺍﻟﻮﻗﺖ ﺍﻟﻮﺍﺣﺪ.‬ ‫ﻭﺑﺎﺧﺘﺼﺎﺭ ﺳﻮﻑ ﻧﻌﺮﺽ ﻟﺜﻼﺙ ﺁﻻﺕ ﲡﺎﺭﻳﺔ ﺗﺴﺘﺨﺪﻡ ﺗﻘﻨﻴﺔ ﺍﻟﺬﺍﻛﺮﺓ ﺍﳌﺸﺘﺮﻛﺔ، ﻭﳚﺐ ﺃﻥ‬ ‫ﻧﺄﺧﺬ ﰲ ﻋﲔ ﺍﻻﻋﺘﺒﺎﺭ ﺍﻟﺘﻨﻮﻉ ﰲ ﺷﺒﻜﺎﺕ ﺍﻟﺮﺑﻂ.‬ ‫اﻟﻨﻤ ﻮذج اﻷول: ﺍﳊﺎﺳﺐ ﺍﻟﻀﺨﻢ‬ ‫‪X-MP‬‬ ‫‪ Cray‬ﳛﺘﻮﻱ ﻋﻠﻰ ﺃﺭﺑﻊ ﻣﻌﺎﳉﺎﺕ، ﻛﻞ ﻣﻌﺎﰿ‬ ‫ﻟﻪ ﺃﺭﺑﻌﺔ ﻣﻨﺎﻓﺬ ﻟﻠﺬﺍﻛﺮﺓ ﺍﳌﺸﺘﺮﻛﺔ ﳝﻜﻦ ﺃﻥ ﺗﺼﻞ ﺇﱃ ٤٦ ﳐﺰﻥ ﺫﺍﻛﺮﺓ )ﺍﻟﺸﻜﻞ 01-2(.‬ ‫اﻟﺸﻜﻞ)01-2(: ﻳﻮﺿﺢ اﻟﺠﻬﺎز 84/‪ Cray X-MP‬وآﻞ ﻣﻌﺎﻟﺞ )‪ (PE‬ﻟﻪ أرﺏﻊ ﻣﻨﺎﻓﺬ ﻟﻠﺬاآﺮة اﻟﻤﺸﺘﺮآﺔ‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫93‬ ‫اﻟﻨﻤﻮذج‬ ‫اﻟﺜﺎﻥﻲ: ‪The Alliant FX/8 minisupercomputer‬‬ ‫ﳛﺘﻮﻱ ﻫﺬﺍ ﺍﳊﺎﺳﺐ ﻋﻠﻰ ﲦﺎﱐ ﻭﺣﺪﺍﺕ ﺣﺴﺎﺑﻴﺔ )‪ ،(CEs‬ﻭﺗﻘﺘـﺴﻢ ﻫـﺬﻩ ﺍﻟﻮﺣـﺪﺍﺕ‬ ‫ﺍﳊﺴﺎﺑﻴﺔ ﻓﻴﻤﺎ ﺑﻴﻨﻬﺎ ﺫﺍﻛﺮﺓ ﻣﺸﺘﺮﻛﺔ، ﻭﺗﺮﺗﺒﻂ ﺑﻄﺮﻳﻘﺔ ‪ Crossbar Switch‬ﺇﱃ ﺫﺍﻛﺮﺗﲔ ﺧﺎﺑﻴﺘﲔ‬ ‫)‪ (cache‬ﺫﻭﺍﺕ ٤٦ ﻛﻴﻠﻮ ﺑﺎﻳﺖ. ﻭﺗﺘﻌﺎﻗﺐ ﺍﻟﺬﺍﻛﺮﺗﺎﻥ ﺍﳋﺎﺑﻴﺘﺎﻥ ﺍﻟﻮﻟﻮﺝ ﺇﱃ ﺍﻟﺬﺍﻛﺮﺓ ﺍﳌﺸﺘﺮﻛﺔ‬ ‫ﻣﻦ ﺧﻼﻝ ﻧﺎﻗﻞ ﺫﻭ ﺳﻌﺔ ٨٨١ ﻣﻴﻐﺎﺑﺎﻳﺖ/ﺛﺎﻧﻴﺔ.‬ ‫اﻟﺸﻜﻞ)11-2(: ﻳﻮﺿﺢ اﻟﺠﻬﺎز 8/‪ Alliant FX‬ﺏﺜﻤﺎﻥﻴﺔ ﻣﻌﺎﻟﺠﺎت )‪ (CEs‬ﺕﺸﺘﺮك ﻓﻲ اﻟﺬاآﺮة.‬ ‫اﻟﻨﻤﻮذج‬ ‫اﻟﺜﺎﻟﺚ: ‪The Bolt,Beranek and Newman (BBN)Butterfly‬‬ ‫ﺻﻨﻊ ﻫﺬﺍ ﺍﳊﺎﺳﺐ ﰲ ﻋﺎﻡ 3891، ﻭﻫﻮ ﺣﺎﺳﺐ ﻣﺘﻮﺍﺯﻱ ﻋﺎﻡ ﺍﻟﻐﺮﺽ، ﻳﺘﻜـﻮﻥ ﻫـﺬﺍ‬ ‫ﺍﳊﺎﺳﺐ ﻣﻦ 2 ﺇﱃ 652 ﻋﻘﺪﺓ، ﻭﻛﻞ ﻋﻘﺪﺓ ﻓﻴﻬﺎ ﻣﻌﺎﰿ ﻣﻦ ﻧﻮﻉ 00086‪ MC‬ﳝﻜﻦ ﺃﻥ ﺗـﺼﻞ‬ ‫ﺫﺍﻛﺮﺓ ﻟﻠﻤﻌﺎﰿ ﺇﱃ 4 ﻣﻴﻐﺎﺑﺎﻳﺖ، ﺍﻟﻌﻘﺪ ﺗﺮﺗﺒﻂ ﻓﻴﻤﺎ ﺑﻴﻨﻬﺎ ﺑﻄﺮﻳﻘـﺔ ﺷـﺒﻜﺔ ﺍﻟﻔﺮﺍﺷـﺔ ﺍﳌﻠﺘﻔـﺔ‬ ‫)‪.(butterfly network‬‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫04‬ ‫اﻟﺸﻜﻞ )21-2(: اﻟﺠﻬﺎز ‪ BBN Butterfly‬ﻳﻤﻜﻦ ﻓﻴﻪ ﻷي ﻋﻨﺼﺮ ﻣﻌﺎﻟﺠﺔ ‪ P‬أن ﻳﺘﺤﻮل إﻟﻰ أي وﺡﺪة‬ ‫ذاآﺮة ‪M‬‬ ‫ﺇﻥ ﻟﻜﻞ ﺁﻻﺕ ﺍﻟﺬﺍﻛﺮﺓ ﺍﳌﺸﺘﺮﻛﺔ ‪ MIMD‬ﲰﺎﺕ ﻣﺸﺘﺮﻛﺔ. ﻳﺰﻭﺩ ﳕﻮﺫﺝ ﺍﻟﺬﺍﻛﺮﺓ ﺍﳌـﺸﺘﺮﻛﺔ‬ ‫ﳐﺰﻥ ﻟﻠﻤﱪﻣﺞ . ﻭﻫﺬﺍ ﻳﺘﻮﺍﻓﻖ ﻣﻊ ﻭﺟﻬﺔ ﺍﻟﻨﻈﺮ ﺍﻟﺘﻘﻠﻴﺪﻳﺔ ﻟﺪﻯ ﻣﻌﻈﻢ ﺍﳌﱪﳎﲔ . ﻭﻟﺬﻟﻚ ﺍﺳﺘﺨﺪﺍﻡ‬ ‫ﳕﻮﺫﺝ ﺍﻟﺬﺍﻛﺮﺓ ﺍﳌﺸﺘﺮﻛﺔ ﺃﺳﻬﻞ ﻟﻠﱪﻧﺎﻣﺞ ﻣﻦ ﳕﻮﺫﺝ ﲤﺮﻳﺮ ﺍﻟﺮﺳﺎﻟﺔ ﻋﻠﻰ ﺳﺒﻴﻞ ﺍﳌﺜﺎﻝ .‬ ‫ﻭﺍﻟﻌﻴﺐ ﺍﻟﻮﺍﺿﺢ ﺍﳌﺸﺘﺮﻙ ﻫﻮ ﺍﻟﻨﻘﺎﻁ ﺍﻟﺴﺎﺧﻨﺔ )‪ (hot spot‬ﰲ ﺍﻟﺬﺍﻛﺮﺓ ﺍﳌـﺸﺘﺮﻛﺔ ﺣﻴـﺚ‬ ‫ﲢﺎﻭﻝ ﺍﳌﻌﺎﳉﺎﺕ ﺃﻥ ﺗﻜﺘﺐ ﻋﻠﻰ ﻧﻔﺲ ﺧﻠﻴﺔ ﺍﻟﺬﺍﻛﺮﺓ . 
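ويمكن توضيح فكرة الأقفال المذكورة أعلاه برسم تقريبي بلغة Python (مثال افتراضي ليس من الكتاب): القفل يضمن أن معالجاً (خيطاً) واحداً فقط يعدّل خلية الذاكرة المشتركة في اللحظة الواحدة، فلا تضيع أي عملية كتابة:

```python
# مثال افتراضي يوضح استخدام قفل (Lock) لحماية خلية ذاكرة مشتركة
# من تداخل الكتابة بين عدة "معالجات" (ممثلة هنا بخيوط).
import threading

counter = 0                 # خلية الذاكرة المشتركة
lock = threading.Lock()     # آلية التزامن (القفل)

def worker(n):
    global counter
    for _ in range(n):
        with lock:          # خيط واحد فقط يدخل المنطقة الحرجة
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 4000
```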
ﻭ ﻷﻥ ﺍﳌﻌﺎﳉﺎﺕ ﳚﺐ ﺃﻥ ﺗﻨﺘﻈﺮ ﺣـﱴ‬ ‫ﺗﻜﻮﻥ ﺧﻠﻴﺔ ﺍﻟﺬﺍﻛﺮﺓ ﺟﺎﻫﺰﺓ ﻓﺈﻥ ﺫﻟﻚ ﳝﻜﻦ ﺃﻥ ﻳﻌﻴﻖ ﺃﺩﺍﺀ ﺍﻟﻨﻘﺎﻁ ﺍﻟﺴﺎﺧﻨﺔ .‬ ‫ﻭﺍﻟﻌﻴﺐ ﺍﻷﺧﺮ ﻫﻮ ﺃﻥ ﺍﳌﱪﻣﺞ ﻭﺍﳌﺘﺮﺟﻢ ‪ compiler‬ﻭﻧﻈﺎﻡ ﺍﻟﺘﺸﻐﻴﻞ ﳚﺐ ﺃﻥ ﺗﻘﺮﺭ ﻛﻴـﻒ‬ ‫ُﻘﺴﻢ ﺍﻟﱪﻧﺎﻣﺞ ﻋﻠﻰ ﻋﺪﺓ ﻣﻌﺎﳉﺎﺕ .‬ ‫ﻳّ‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫14‬ ‫‪ 2.1.4-b‬ﺕﻤﺮﻳﺮ اﻟﺮﺳﺎﺋﻞ ‪MIMD Message Passing‬‬ ‫ﰲ ﳕﻮﺫﺝ ﲤﺮﻳﺮ ﺍﻟﺮﺳﺎﺋﻞ ﻫﺬﺍ ﻓﺈﻥ ﻛﻞ ﻣﻌﺎﰿ ﻟﻪ ﺫﺍﻛﺮﺓ ﺩﺍﺧﻠﻴﺔ ﺧﺎﺻﺔ ﺑـﻪ، ﻭﻟﻜـﻲ ﺗﺘﻮﺍﺻـﻞ‬ ‫١‬ ‫ﺍﳌﻌﺎﳉﺎﺕ ﻓﻴﻤﺎ ﺑﻴﻨﻬﺎ ﻓﺈﻬﻧﺎ ﺗﺮﺳﻞ ﺭﺳﺎﺋﻞ ﺇﱃ ﻛـﻞ ﻣﻌـﺎﰿ ﻋـﻦ ﻃﺮﻳـﻖ ﺷـﺒﻜﺔ ﺍﻟـﺮﺑﻂ‬ ‫)‪) .(interconnection Network‬ﺃﻧﻈﺮ ﺍﻟﺸﻜﻞ 31-2(‬ ‫اﻟﺸﻜﻞ )31-2(: ﻳﻮﺿﺢ ﻥﻤﻮذج ﺕﻤﺮﻳﺮ اﻟﺮﺳﺎﺋﻞ‬ ‫ﻭﻛﻤﺎ ﰲ ﳕﻮﺫﺝ ﺍﻟﺬﺍﻛﺮﺓ ﺍﳌﺸﺘﺮﻛﺔ، ﻓﺸﺒﻜﺔ ﺍﻟﺮﺑﻂ ﳝﻜﻦ ﺃﻥ ﺗﺄﺧﺬ ﻋﺪﺓ ﺃﺷﻜﺎﻝ ﳐﺘﻠﻔـﺔ.‬ ‫ﻭﺷﺒﻜﺔ ﺍﻟﺮﺑﻂ ﺍﻟﺸﺎﺋﻌﺔ ﰲ ﳕﻮﺫﺝ ﲤﺮﻳﺮ ﺍﻟﺮﺳﺎﺋﻞ ﻫﻲ "ﺍﳌﻜﻌﺐ ﺍﻟﺜﻨﺎﺋﻲ ﻣﺘﻌﺪﺩ ﺍﻷﺑﻌﺎﺩ" ﺃﻭ " ‪n‬ﻣﻦ‬ ‫ﺍﻷﺑﻌﺎﺩ" )‪ ، (n-dimensional‬ﲝﻴﺚ ﺃﻥ ﻛﻞ ﺑﻌﺪ ﻣﻦ ﺍﻷﺑﻌﺎﺩ ﻳﺜﺒﺖ ﻋﻠﻴﻪ ﻣﻌﺎﳉﲔ. ﻣﺜﻼ... ﰲ‬ ‫ﹰ‬ ‫ﺍﳌﻜﻌﺐ ﺛﻼﺛﻲ ﺍﻷﺑﻌﺎﺩ ﺳﺘﻜﻮﻥ ﺍﳌﻌﺎﳉﺎﺕ ﰲ ﺯﻭﺍﻳﺎ ﺍﳌﻜﻌﺐ ﻛﻤﺎ ﰲ ﺍﻟﺸﻜﻞ)41-2(:‬ ‫اﻟﺸﻜﻞ )41-2(: ﻣﻜﻌﺐ ﺙﻼﺙﻲ اﻷﺏﻌﺎد، اﻟﻤﻌﺎﻟﺠﺎت ﺕﺘﻮﺿﻊ ﻋﻠﻰ زواﻳﺎﻩ‬ ‫1‬ ‫ﻓﻲ اﻟﺼﻔﺤﺎت ﻣﻦ ٤٤ وﺡﺘﻰ ٢٥ ﺷﺮح ﻟﺸﺒﻜﺎت اﻟﺮﺑﻂ.‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫24‬ ‫ﻳﺘﺸﺎﺑﻪ ﳕﻮﺫﺝ ﲤﺮﻳﺮ ﺍﻟﺮﺳﺎﻟﺔ ﻣﻊ ﺍﻟﻠﺠﻨﺔ ﻓﻘﻂ ﰲ ﺣﺎﻝ ﺇﺫﺍ ﻛﺎﻥ ﲟﻘﺪﻭﺭ ﺍﻷﻋﻀﺎﺀ ﺃﻥ ﻳﻜﺘﺒﻮﺍ‬ ‫ﻣﻼﺣﻈﺎﻬﺗﻢ ﻟﺒﻌﻀﻬﻢ . ﺇﻥ ﺃﺳﻠﻮﺏ ﺗﻮﺟﻴﻪ ﺍﻟﺮﺳﺎﻟﺔ ﻭﺳﺮﻋﺘﻬﺎ ﻳﺸﻜﻞ ﺃﻣﺮﹰﺍ ﻫﺎﻣﺎ ﻷﺩﺍﺀ ﺍﻵﻻﺕ ﻣﻦ‬ ‫ﹰ‬ ‫ﻫﺬﺍ ﺍﻟﻨﻮﻉ .‬ ‫ﻭﳍﺬﺍ ﺍﻟﻨﻤﻮﺫﺝ ﺍﻟﻌﺪﻳﺪ ﻣﻦ ﺍﳊﺴﻨﺎﺕ، ﻣﻨﻬﺎ ﺃﻧﻪ ﻻ ﻳﻮﺟﺪ ﺫﺍﻛﺮﺓ ﻣﺸﺘﺮﻛﺔ، ﻭﻫـﺬﺍ ﻳﻌـﲏ‬ ‫ﺍﺧﺘﻔﺎﺀ ﺍﳌﺸﺎﻛﻞ ﺍﻟﻨﺎﲡﺔ ﻋﻦ ﺍﻟﺘﺪﺍﺧﻞ ﻭﺍﻟﺘﻀﺎﺭﺏ ﰲ ﺍﻟﺬﺍﻛﺮﺓ ﻭﺍﻟﱵ ﻳﻌﺎﱐ ﻣﻨﻬﺎ ﳕﻮﺫﺝ ﺍﻟـﺬﺍﻛﺮﺓ‬ ‫ﺍﳌﺸﺘﺮﻛﺔ.ﻛﻤﺎ ﺃﻥ ﺍﻟﻨﻔﺎﺫ ﺃﻭ ﺍﻟﻮﺻﻮﻝ ﺇﱃ ﺍﻟﺬﺍﻛﺮﺓ ﺳﺮﻳﻊ.‬ ‫ﳜﺪﻡ ﻫﺬﺍ ﺍﻟﻨﻮﻉ ﻣﻦ ﺍﻵﻻﺕ ﻏﺮﺿﲔ ﳘﺎ:‬ ‫ﺍﻷﻭﻝ: ﻳﻌﻤﻞ ﻛﺄﺩﺍﺓ ﺍﺗﺼﺎﻝ ﻟﻌﺒﻮﺭ ﻗﻴﻢ ﺍﳌﻌﻄﻴﺎﺕ ﺑﲔ ﺍﳌﻌﺎﳉﺎﺕ .‬ ‫ﻭﺍﻟﺜﺎﱐ: ﻳﻌﻤﻞ ﻛﺂﻟﻴﺔ ﺗﺰﺍﻣﻦ ﻟﻠﺨﻮﺍﺭﺯﻣﻴﺎﺕ ﺍﳌﺘﻮﺍﺯﻳﺔ.‬ ‫ﻭﲟﺎ ﺃﻥ ﺍﳌﻌﺎﳉﺎﺕ ﻻ ﺗﺸﺘﺮﻙ ﰲ ﺫﺍﻛﺮﺓ ﻭﺍﺣﺪﺓ ﻓﺈﻥ ﳕﻮﺫﺝ ﲤﺮﻳﺮ ﺍﻟﺮﺳﺎﺋﻞ ﳝﻜﻦ ﺃﻥ ﳛﺘﻮﻱ‬ ‫ﻋﻠﻰ ﻋﺪﺩ ﻛﺒﲑ ﺟﺪﹰﺍ ﻣﻦ ﺍﳌﻌﺎﳉﺎﺕ، ﲞﻼﻑ ﳕﻮﺫﺝ ﺍﻟﺬﺍﻛﺮﺓ ﺍﳌﺸﺘﺮﻛﺔ. 
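ويمكن محاكاة نموذج تمرير الرسائل الموصوف أعلاه بشكل مبسط (مثال افتراضي بلغة Python وليس من الكتاب): لكل «معالج» معطياته المحلية الخاصة، ويجري التواصل حصراً عبر قناة رسائل تمثل شبكة الربط:

```python
# رسم تقريبي لنموذج تمرير الرسائل: لا توجد ذاكرة مشتركة،
# والقناة (الطابور) تمثل شبكة الربط بين معالجين.
import queue
import threading

channel = queue.Queue()   # قناة الرسائل بين المعالجين

def sender():
    local_data = 42       # معطيات محلية خاصة بالمعالج المرسل
    channel.put(local_data)   # إرسال رسالة عبر الشبكة

def receiver(result):
    result.append(channel.get())  # استقبال الرسالة (مع انتظار وصولها)

result = []
t1 = threading.Thread(target=sender)
t2 = threading.Thread(target=receiver, args=(result,))
t1.start(); t2.start()
t1.join(); t2.join()

print(result)  # [42]
```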
ﻓﻌﻠﻰ ﺳﺒﻴﻞ ﺍﳌﺜـﺎﻝ...‬ ‫ﻋﻨﺪﻣﺎ ﳓﺎﻭﻝ ﺇﺿﺎﻓﺔ ﻣﻌﺎﳉﺎﺕ ﺃﻛﺜﺮ ﺇﱃ ﺍﳉﻬﺎﺯ ‪ CRAY X-MP‬ﻓﺈﻥ ﺍﻟﻌﺪﺩ ﺍﳌﻤﻜﻦ ﻟﻠﻤﻨﺎﻓﺬ ﺇﱃ‬ ‫ﺍﻟﺬﺍﻛﺮﺓ ﺍﳌﺸﺘﺮﻛﺔ ﳏﺪﻭﺩ )ﺍﻟﺸﻜﻞ )01-2(( ﻭﻻ ﳝﻜﻦ ﰲ ﻫﺬﻩ ﺍﻟﻄﺮﻳﻘﺔ ﺯﻳﺎﺩﺓ ﻋﺪﺩ ﺍﳌﻌﺎﳉـﺎﺕ‬ ‫ﺇﱃ ﺃﻛﺜﺮ ﻣﻦ ٦١ ﻣﻌﺎﰿ.‬ ‫ﺇﻥ ﺍﻟﻌﻴﺐ ﺍﻟﺮﺋﻴﺴﻲ ﰲ ﳕﻮﺫﺝ ﲤﺮﻳﺮ ﺍﻟﺮﺳﺎﺋﻞ ﻫﻮ ﺍﳊﻤﻞ ﺍﻟﺰﺍﺋﺪ ﺍﳌﻠﻘﻰ ﻋﻠﻰ ﺍﳌﱪﻣﺞ، ﻓﻠـﻴﺲ‬ ‫ﻛﺎﻓﻴﺎ ﺃﻥ ﻳﻘﻮﻡ ﺍﳌﱪﻣﺞ ﺑﺘﻘﺴﻴﻢ ﺍﻟﱪﻧﺎﻣﺞ ﻋﻠﻰ ﺍﳌﻌﺎﳉﺎﺕ، ﺑﻞ ﻋﻠﻴﻪ ﺃﻳﻀﺎ ﺃﻥ ﻳﻮﺯﻉ ﺍﳌﻌﻄﻴﺎﺕ. ﺇﻥ‬ ‫ﹰ‬ ‫ﹰ‬ ‫ﺑﺮﳎﺔ ﺍﻵﻻﺕ ﻣﻦ ﻧﻮﻉ ﲤﺮﻳﺮ ﺍﻟﺮﺳﺎﺋﻞ ﺗﺘﻄﻠﺐ ﻣﻦ ﺍﳌﱪﳎﲔ ﺃﻥ ﻳﻌﻴﺪﻭﺍ ﺍﻟﻨﻈﺮ ﰲ ﺍﳋﻮﺍﺭﺯﻣﻴﺎﺕ ﻛﻲ‬ ‫ﻳﻜﻮﻥ ﺍﺳﺘﺨﺪﺍﻡ ﺍﻵﻟﺔ ﺃﻛﺜﺮ ﻛﻔﺎﺀﺓ.‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫34‬ ‫ﻣﻦ ﺍﻷﻣﺜﻠﺔ ﻷﺟﻬﺰﺓ ﺗﺴﺘﺨﺪﻡ ﻫﺬﺍ ﺍﻟﻨﻮﻉ... ‪Intel iPSC‬ﻭ ﳍـﺬﻩ ﺍﻵﻟـﺔ 821 ﻣﻌـﺎﰿ‬ ‫)821= 72 ﺃﻱ ﺃﻥ ﺍﳌﻜﻌﺐ ﺫﻭ 7 ﺃﺑﻌﺎﺩ (. ﻛﺬﻟﻚ ﺷﺮﻛﺔ ‪ nCUBE‬ﻗﺎﻣﺖ ﺑﺘﺴﻮﻳﻖ ﺁﻟﺔ ﳍـﺎ‬ ‫2918 ﻣﻌﺎﰿ )2918=312، ﺃﻱ ﺃﻥ ﺍﳌﻜﻌﺐ ﺫﻭ 31 ﺑﻌﺪ(.‬ ‫ﻭﺑﺎﻟﺮﻏﻢ ﻣﻦ ﺃﻥ ﺃﻛﺜـﺮ ﺍﻵﻻﺕ ﺍﻟﺘﺠﺎﺭﻳـﺔ ﺗﺘﺒـﻊ ﻷﺣـﺪ ﺃﺻـﻨﺎﻑ ﻓﻼﻳـﻦ )‬ ‫‪ (SIMD,MIMD‬ﻓﺈﻧﻪ ﻳﻮﺟﺪ ﺑﻌﺾ ﺍﻟﺘﺼﺎﻣﻴﻢ ﻻ ﺗﺘﺒﻊ ﻟﺘﺼﻨﻴﻒ ﻓﻼﻳﻦ، ﻣﺜﻼ... ﺑﻌﺾ ﺍﻵﻻﺕ‬ ‫ﹰ‬ ‫ﻣﺜﻞ ﺍﻵﻟﺔ ‪ ICL DAP‬ﳝﻜﻦ ﺃﻥ ﺗﻮﺿﻊ ﰲ ﺃﻛﺜﺮ ﻣﻦ ﺗﺼﻨﻴﻒ. ﻭﺍﻟﺒﻌﺾ ﺍﻵﺧﺮ ﻣﺜﻞ ‪dataflow‬‬ ‫ﻭ ‪ reduction‬ﻻ ﺗﺘﺒﻊ ﻷﻱ ﺗﺼﻨﻴﻒ ﻣﻦ ﺗﺼﻨﻴﻔﺎﺕ ﻓﻼﻳﻦ.‬ ‫,‪SISD‬‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫44‬ ‫2.2 ﺷﺒﻜﺎﺕ ﺍﻟﺮﺑﻂ )‪(Interconnection Networks‬‬ ‫ﺗﻌﺪ ﺷﺒﻜﺔ ﺍﻟﺮﺑﻂ ﺃﺩﺍﺀ ﺍﻟﻮﺻﻞ ﺑﲔ ﺍﳌﻌﺎﳉﺎﺕ ﻭﺍﻟﺬﺍﻛﺮﺓ ﰲ ﺍﳊﺎﺳﺒﺎﺕ ﺍﳌﺘﻮﺍﺯﻳﺔ . ﻭ ﺗﺘـﺄﻟﻒ‬ ‫ﻋﺎﺩﺓ ﻣﻦ ﺧﻄﻮﻁ ﺍﺗﺼﺎﻝ ﻭﻋﻨﺎﺻﺮ ﻣﺘﺨﺼﺼﺔ ﻟﻨﻘـﻞ ﺍﳌﻌﻠﻮﻣـﺎﺕ ﻛﺎﳌﺒـﺪﻻﺕ )‪(switches‬‬ ‫ﻭﺍﳌﻮﺟﻬﺎﺕ )‪ .(Routers‬ﲣﺘﻠﻒ ﰲ ﺑﻨﻴﺘﻬﺎ ﻭﰲ ﺍﺳﺘﺨﺪﺍﻣﻬﺎ ﻋﻦ ﺍﻟﺸﺒﻜﺎﺕ ﺍﳊﺎﺳﻮﺑﻴﺔ ﺍﻟﱵ ﺗﺼﻞ‬ ‫ﻋﺎﺩﺓ ﺑﲔ ﺍﳊﺎﺳﺒﺎﺕ . ﻓﺸﺒﻜﺔ ﻟﺮﺑﻂ ﰲ ﺍﳊﺎﺳﻮﺏ ﺍﳌﺘﻮﺍﺯﻱ ﺗﺼﻞ ﺑﲔ ﻣﻌﺎﳉـﺎﺕ ﻣﺘﻘﺎﺭﺑـﺔ ﻭ‬ ‫ﺍﳌﺴﺎﻓﺔ ﺍﻟﻔﺎﺻﻠﺔ ﺑﻴﻨﻬﺎ ﺻﻐﲑﺓ ﺟﺪﹰﺍ، ﻭﻗﺪ ﺗﺘﻮﺍﺟﺪ ﻫﺬﻩ ﺍﳌﻌﺎﳉﺎﺕ ﻋﻠﻰ ﺍﻟﺒﻄﺎﻗـﺔ ﻧﻔـﺴﻬﺎ . ﺃﻣـﺎ‬ ‫ﺍﻟﺸﺒﻜﺎﺕ ﺍﳊﺎﺳﻮﺑﻴﺔ ﻓﻬﻲ ﺗﺮﺑﻂ ﺣﺎﺳﺒﺎﺕ ﻣﺘﺒﺎﻋﺪﺓ ﺗـﺼﻞ ﺍﳌـﺴﺎﻓﺔ ﻓﻴﻤـﺎ ﺑﻴﻨـﻬﺎ ﺇﱃ ﺁﻻﻑ‬ ‫ﺍﻟﻜﻴﻠﻮﻣﺘﺮﺍﺕ ﰲ ﺑﻌﺾ ﺍﻷﺣﻴﺎﻥ .‬ ‫ﻳﻌﱪ ﺗﺮﺍﺳﻞ ﺍﳌﻌﻄﻴﺎﺕ ﻋﱪ ﺍﻟﺸﺒﻜﺎﺕ ﺍﳊﺎﺳﻮﺑﻴﺔ ﻋﻦ ﺍﻟﻮﻟﻮﺝ ﺇﱃ ﺣﺎﺳﻮﺏ ﺑﻌﻴـﺪ ﺟﻐﺮﺍﻓﻴـﺎ‬ ‫ﹰ‬ ‫ﻬﺑﺪﻑ ﺍﺳﺘﺨﻼﺹ ﺑﻌﺾ ﺍﳌﻌﻄﻴﺎﺕ ﺃﻭ ﻟﺘﻨﻔﻴﺬ ﺑﻌﺾ ﺍﻟﱪﺍﻣﺞ ﺍﳊﺎﺳﻮﺑﻴﺔ ﻋﻦ ﺑﻌﺪ. 
ﺃﻣـﺎ ﺗﺮﺍﺳـﻞ‬ ‫ﺍﳌﻌﻄﻴﺎﺕ ﻋﱪ ﺷﺒﻜﺔ ﺍﳊﺎﺳﻮﺏ ﺍﳌﺘﻮﺍﺯﻱ ﻓﻴﻌﱪ ﻋﻦ ﺍﻟﺘﺒﻌﻴﺔ ﺍﳌﻮﺟﻮﺩﺓ ﺑـﲔ ﻣﻬﻤـﺎﺕ ﺍﻟﱪﻧـﺎﻣﺞ‬ ‫ﺍﳌﺘﻮﺍﺯﻱ ﻭﻋﻦ ﺍﻻﺭﺗﺒﺎﻁ ﺑﲔ ﺍﳌﺘﺤﻮﻻﺕ ﺍﳌﺴﺘﺨﺪﻣﺔ ﰲ ﺍﳌﻬﻤﺎﺕ ﺍﳌﺨﺘﻠﻔﺔ ﻭﺑﻔﻀﻞ ﺷﺒﻜﺔ ﺍﻟﺮﺑﻂ‬ ‫ﻭ ﺗﻘﻨﻴﺎﻬﺗﺎ ﺍﻟﱵ ﺗﺘﻄﻮﺭ ﺑﺸﻜﻞ ﻣﺴﺘﻤﺮ ﻓﻘﺪ ﲤﻜﻦ ﺍﳌﺼﻤﻤﻮﻥ ﻣﻦ ﺑﻨﺎﺀ ﺍﳊﺎﺳﺒﺎﺕ ﺍﳌﺘﻮﺍﺯﻳـﺔ ﺫﺍﺕ‬ ‫ﺍﻟﺬﻭﺍﻛﺮ ﺍﳌﻮﺯﻋﺔ ، ﻭ ﲤﻜﻦ ﺷﺒﻜﺔ ﺍﻟﺮﺑﻂ ﰲ ﻫﺬﻩ ﺍﳊﺎﺳﺒﺎﺕ ﻣﻦ ﺗﻨﻔﻴﺬ ﻣﻬﻤﺎﺕ ﺍﻟﱪﻧﺎﻣﺞ ﺍﳌﺘﻮﺍﺯﻱ‬ ‫ﺭﻏﻢ ﻭﺟﻮﺩ ﺍﻟﺘﺒﻌﻴﺔ ﻓﻴﻤﺎ ﺑﻴﻨﻬﺎ . ﻭ ﺗﻠﻌﺐ ﺗﺒﻮﻟﻮﺟﻴﺎ ﺷﺒﻜﺔ ﺍﻟﺮﺑﻂ ﻭﺩﺭﺟﺔ ﻗﺮﺏ ﺍﳌﻌﺎﳉـﺎﺕ ﻣـﻦ‬ ‫ﺑﻌﻀﻬﺎ ﺩﻭﺭﹰﺍ ﻫﺎﻣﺎ ﰲ ﺑﺮﳎﺔ ﻫﺬﻩ ﺍﳊﺎﺳﺒﺎﺕ . ﻭ ﺗﻄﻠﻖ ﻫﺬﻩ ﺍﻟﺼﻔﺔ ﺃﻳﻀﺎ ﻋﻠﻰ ﺍﻟـﺸﺒﻜﺎﺕ ﺍﻟـﱵ‬ ‫ﹰ‬ ‫ﹰ‬ ‫ﺗﺴﻤﺢ ﺑﺘﻐﲑ ﻃﺮﻳﻖ ﻧﻘﻞ ﺍﳌﻌﻄﻴﺎﺕ ﺩﻳﻨﺎﻣﻴﻜﻴﺎ ﺑﲔ ﻣﻌﺎﳉﲔ ، ﻭ ﻳﺘﻢ ﻫﺬﺍ ﺍﻟﺘﻐﻴﲑ ﻬﺑﺪﻑ ﺇﻧﺸﺎﺀ ﻃﺮﻕ‬ ‫ﹰ‬ ‫ﺟﺪﻳﺪﺓ ﻟﺘﺮﺍﺳﻞ ﺍﳌﻌﻄﻴﺎﺕ ، ﻭ ﺗﺮﺗﺒﻂ ﻫﺬﻩ ﺍﳋﺎﺻﺔ ﺃﻳﻀﺎ ﺑﺎﻟﺸﺒﻜﺎﺕ ﺍﻟﺪﻳﻨﺎﻣﻴﻜﻴﺔ . ﻭﺑﺎﻟﺮﻏﻢ ﻣـﻦ‬ ‫ﹰ‬ ‫ﺗﻨﻮﻉ ﺧﺎﺻﻴﺔ ﺍﻟﺸﺒﻜﺎﺕ ﺇﻻ ﺃﻥ ﺧﺎﺻﻴﺔ ﺛﺒﻮﺗﻴﺔ ﻃﺮﻕ ﻧﻘﻞ ﺍﳌﻌﻠﻮﻣﺎﺕ ﺗﺒﻘﻰ ﺃﳘﻬﺎ .‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫54‬ ‫1.2.2 اﻟﺸﺒﻜﺎت اﻟﺴﻜﻮﻥﻴﺔ‬ ‫ﺗﺘﻤﻴﺰ ﺍﻟﺸﺒﻜﺎﺕ ﺍﻟﺴﻜﻮﻧﻴﺔ ﺑﺘﺒﻮﻟﻮﺟﻴﺎ ﺛﺎﺑﺘﺔ . ﻭﺗﺘﻤﺜﻞ ﲟﺨﻄﻂ ﺑﻴﺎﱐ ﻏﲑ ﻣﻮﺟﻪ ﻋﻘﺪﻩ ﻫﻲ ﻋﻘﺪ‬ ‫ﻣﻌﺎﳉﺔ ﺍﳊﺎﺳﻮﺏ ﺍﳌﺘﻮﺍﺯﻱ ﻭﺧﻄﻮﻃﻪ ﻫﻲ ﺧﻄﻮﻁ ﺍﻻﺗﺼﺎﻝ . ﻭﻟﻜﻞ ﻋﻘﺪﻩ ﰲ ﻫـﺬﻩ ﺍﳌﺨﻄـﻂ‬ ‫ﺩﺭﺟﺔ ﺗﺘﻤﺜﻞ ﺑﻌﺪﺩ ﺧﻄﻮﻁ ﺍﺗﺼﺎﳍﺎ ﻣﻊ ﺍﻟﻌﻘﺪ ﺍﻷﺧﺮﻯ ﻭ ﺗﺪﻝ ﺍﻟﺪﺭﺟﺔ ﻋﻠﻰ ﺇﻣﻜﺎﻧﻴـﺔ ﺗﺒـﺎﺩﻝ‬ ‫ﺍﳌﻌﻄﻴﺎﺕ ﺑﺸﻜﻞ ﻣﺒﺎﺷﺮ ﺑﲔ ﺍﻟﻌﻘﺪﺓ ﻭﺍﻷﺧﺮﻯ .‬ ‫اﻟﺸﻜﻞ )51-2( : ﺕﻤﺜﻞ اﻟﺸﺒﻜﺔ اﻟﺴﻜﻮﻥﻴﺔ ﺏﻤﺨﻄﻂ ﺏﻴﺎﻥﻲ‬ ‫ﺗﻜﻮﻥ ﺍﻟﺸﺒﻜﺔ ﻣﻨﺘﻈﻤﺔ ‪ Regular‬ﺇﺫﺍ ﻛﺎﻧﺖ ﲨﻴﻊ ﻋﻘﺪﻫﺎ ﻣﻦ ﻧﻔﺲ ﺍﻟﺪﺭﺟﺔ . ﻭ ﺗﺘﻤﻴﺰ ﺍﻟﺸﺒﻜﺔ‬ ‫ﺍﻟﺴﻜﻮﻧﻴﺔ ﺃﻳﻀﺎ ﺑﻘﻄﺮﻫﺎ ﻭﻫﻮ ﺍﳌﺴﺎﻓﺔ ﺍﻟﻌﻈﻤﻰ ﺑﲔ ﻋﻘﺪﺗﲔ ﻣﻦ ﻋﻘﺪ ﺍﻟﺸﺒﻜﺔ .‬ ‫ﹰ‬ ‫ﺗﻘﺎﺭﻥ ﺍﻟﺸﺒﻜﺎﺕ ﺍﻟﺴﻜﻮﻧﻴﺔ ﻓﻴﻤﺎ ﺑﻴﻨﻬﺎ ﺑﺪﻻﻟﺔ ﺩﺭﺟﺘﻬﺎ ﻭﻗﻄﺮﻫـﺎ ، ﻭﺗﻔـﻀﻞ ﺍﻟـﺸﺒﻜﺎﺕ ﺫﺍﺕ‬ ‫ﺍﻟﺪﺭﺟﺔ ﺍﻷﻋﻠﻰ ﻭﺫﺍﺕ ﺍﻟﻘﻄﺮ ﺍﻷﺻﻐﺮ ﻓﺎﻟﺪﺭﺟﺔ ﺍﻟﻌﺎﻟﻴﺔ ﺗﺪﻝ ﻋﻠﻰ ﻣﺮﻭﻧﺔ ﺍﻻﺗﺼﺎﻝ ﺑـﲔ ﺍﻟﻌﻘـﺪ‬ ‫ﺍﳌﺨﺘﻠﻔﺔ ﳑﺎ ﻳﺴﺎﻫﻢ ﰲ ﺗﺴﻬﻴﻞ ﺑﺮﳎﺔ ﻫﺬﻩ ﺍﳊﺎﺳﺒﺎﺕ ، ﺃﻣﺎ ﺍﻟﻘﻄﺮ ﺍﻷﺻﻐﺮ ﻓﻴﺪﻝ ﻋﻠﻰ ﻣـﺴﺎﻓﺎﺕ‬ ‫ﺻﻐﲑﺓ ﺑﲔ ﺍﻟﻌﻘﺪ ﺍﳌﺨﺘﻠﻔﺔ ﻭ ﻳﺘﺮﺟﻢ ﺑﺰﻣﻦ ﺍﺗﺼﺎﻝ ﺃﺻﻐﺮ ﺑﲔ ﺍﳌﻌﺎﳉـﺎﺕ . ﺃﻛﺜـﺮ ﺍﻟـﺸﺒﻜﺎﺕ‬ ‫ﺍﻟﺴﻜﻮﻧﻴﺔ ﺷﻬﺮﺓ ﻭ ﺍﺳﺘﺨﺪﺍﻣﺎ ﻫﻲ :‬ ‫ﹰ‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫64‬ ‫1.1.2.2 اﻟﺸﺒﻜﺔ اﻟﺨﻄﻴﺔ واﻟﺤﻠﻘﻴﺔ‬ ‫ﻭ ﺗﺘﺄﻟﻒ ﺍﻟﺸﺒﻜﺔ ﺍﳋﻄﻴﺔ ﻣﻦ ﻋﺪﺩ ﻣﻦ ﺍﻟﻌﻘﺪ ﺗﺘﺼﻞ ﻓﻴﻤﺎ ﺑﻴﻨﻬﺎ ﺑﺸﻜﻞ ﺧﻄﻲ ، ﻭﻳﻜﻮﻥ ﻟﻜﻞ‬ ‫ﻋﻘﺪﺓ ﻋﻘﺪﺗﲔ ﳎﺎﻭﺭﺗﲔ ﻣﺎ ﻋﺪﺍ ﺍﻟﻌﻘﺪﺓ ﺍﻷﻭﱃ ﻭﺍﻷﺧﲑﺓ ﺣﻴﺚ ﻳﻜﻮﻥ ﻟﻜﻞ ﻣﻨﻬﻤﺎ ﻋﻘﺪﺓ ﳎﺎﻭﺭﺓ‬ ‫ﻭﺣﻴﺪﺓ . 
ﻭﻗﻄﺮ ﻫﺬﻩ ﺍﻟﺸﺒﻜﺔ ﻳﺘﻨﺎﺳﺐ ﻣﻊ ﻋﺪﺩ ﻋﻘﺪﻫﺎ .‬ ‫ﻫﺬﻩ ﺍﻟﺸﺒﻜﺔ ﻏﲑ ﻣﻨﻈﻤﺔ ﻭﻳﻠﺠﺄ ﻋﺎﺩﺓ ﻟﻮﺻﻞ ﻃﺮﻓﻴﻬﺎ ﲝﻴﺚ ﺗﺼﺒﺢ ﺷﺒﻜﺔ ﺣﻠﻘﻴـﺔ ﻣﻨﺘﻈﻤـﺔ‬ ‫ﺩﺭﺟﺘﻬﺎ2 ﻭﻗﻄﺮﻫﺎ ﻳﺴﺎﻭﻱ ﺇﱃ ﻧﺼﻒ ﻋﺪﺩ ﺍﻟﻌﻘﺪ ﻭﻧﻈﺮﹰﺍ ﻟﻠﺘﻨﺎﻇﺮ ﺗﺘﻤﻴﺰ ﺷﺒﻜﺔ ﺍﻟـﺮﺑﻂ ﺑـﺴﺮﻋﺔ‬ ‫ﻧﻘﻠﻬﺎ ﻟﻠﻤﻌﻠﻮﻣﺎﺕ ، ﻭﺗﺆﺛﺮ ﺧﻄﻮﻁ ﺍﻟﻮﺻﻞ ﻭﻋﻨﺎﺻﺮ ﺍﻻﺗﺼﺎﻝ ﺗﺄﺛﲑﹰﺍ ﻣﺒﺎﺷﺮﹰﺍ ﻋﻠﻰ ﻫﺬﻩ ﺍﻟـﺴﺮﻋﺔ‬ ‫ﺍﻟﱵ ﺗﻘﺎﺱ ﺑﻌﺪﺩ ﺍﳋﺎﻧﺎﺕ ﺍﻟﱵ ﳝﻜﻦ ﻧﻘﻠﻬﺎ ﰲ ﺍﻟﺜﺎﻧﻴﺔ ‪.M bits/s‬‬ ‫اﻟﺸﻜﻞ )61-2(‬ ‫ﺗﺴﺘﺨﺪﻡ ﰲ ﺑﻌﺾ ﺍﳊﺎﺳﺒﺎﺕ ﻣﻌﺎﳉﺎﺕ ﺧﺎﺻﺔ ﻣﻦ ﺃﺟﻞ ﺇﺩﺭﺍﺓ ﻭﺑﺮﳎﺔ ﻋﻤﻠﻴﺔ ﻧﻘﻞ ﺍﳌﻌﻄﻴﺎﺕ ﻛﻤﺎ‬ ‫ﰲ ﻋﻘﺪﺓ ﺣﺎﺳﻮﺏ ‪ Paragon‬ﺣﻴﺚ ﻳﺴﺘﺨﺪﻡ ﻣﻌﺎﰿ 068‪ i‬ﻹﺩﺍﺭﺓ ﺍﻻﺗﺼﺎﻝ .‬ ‫2.1.2.2 اﻟﺸﺒﻜﺔ اﻟﻤﺼﻔﻮﻓﻴﺔ و اﻟﻤﺼﻔﻮﻓﻴﺔ اﻟﺤﻠﻘﻴﺔ‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫74‬ ‫ﺗﻜﻮﻥ ﻫﺬﻩ ﺍﻟﺸﺒﻜﺔ ﻋﻠﻰ ﺷﻜﻞ ﻣﺼﻔﻮﻓﺔ ﻟﻜﻞ ﻋﻘﺪﺓ ﺩﺍﺧﻠﻴﺔ ﺃﺭﺑﻊ ﻋﻘﺪ ﳎﺎﻭﺭﺓ ، ﺃﻣـﺎ ﺍﻟﻌﻘـﺪ‬ ‫ﺍﻟﻄﺮﻓﻴﺔ ﻓﺪﺭﺟﺘﻬﺎ )3 ‪ . (2 or‬ﻭﺍﻟﺸﺒﻜﺔ ﺍﳌﻨﺘﻈﻤﺔ ﺍﳌﺼﻔﻮﻓﻴﺔ ﺍﳊﻠﻘﻴﺔ ﻫﻲ ﺍﻷﻛﺜﺮ ﺍﺳﺘﺨﺪﺍﻣﺎ ﻧﻈـﺮﹰﺍ‬ ‫ﹰ‬ ‫ﻟﻠﺘﻨﺎﻇﺮ ﺑﲔ ﺍﻟﻌﻘﺪ ﻣﻦ ﻧﺎﺣﻴﺔ ﻭﻟﺴﻬﻮﻟﺔ ﺑﺮﳎﺘﻬﺎ ﻣﻦ ﻧﺎﺣﻴﺔ ﺃﺧﺮﻯ ﺍﻧﻈﺮ ﺍﻟﺸﻜﻞ )71-2(.‬ ‫اﻟﺸﻜﻞ )71-2 (: اﻟﺸﺒﻜﺔ اﻟﻤﺼﻔﻮﻓﻴﺔ و اﻟﻤﺼﻔﻮﻓﻴﺔ اﻟﺤﻠﻘﻴﺔ‬ ‫ﺗﺴﺘﺨﺪﻡ ﻫﺬﻩ ﺍﻟﺸﺒﻜﺎﺕ ﻛﺜﲑﹰﺍ ﰲ ﺍﳊﺎﺳﺒﺎﺕ ﺍﳌﺘﻮﺍﺯﻳﺔ ﺍﳊﺎﻟﻴﺔ ﻧﻈﹰﺍ ﳌﺮﻭﻧﺔ ﺍﻻﺗـﺼﺎﻝ ﺑـﲔ‬ ‫ﺍﻟﻌﻘﺪ . ﻭ ﻳﻌﺘﱪ ﻣﻌﺎﰿ ﺍﻟﺘﺮﺍﻧﺴﺒﻴﻮﺗﺮ ﻣﻦ ﺃﻛﺜﺮ ﺍﳌﻌﺎﳉﺎﺕ ﻣﻼﺀﻣﺔ ﳍﺬﺍ ﺍﻟﻨﻮﻉ ﻣﻦ ﺍﻟﺘﺒﻮﻟﻮﺟﻴﺎ ﻧﻈـﺮﹰﺍ‬ ‫ﻻﺣﺘﻮﺍﺋﻪ ﺩﺍﺧﻠﻴﺎ ﻋﻠﻰ ﺃﺭﺑﻊ ﻗﻨﻮﺍﺕ ﺍﺗﺼﺎﻝ .‬ ‫ﹰ‬ ‫ﺗﺘﻼﺀﻡ ﻫﺬﻩ ﺍﻟﺘﺒﻮﻟﻮﺟﻴﺎ ﻣﻊ ﺧﻮﺍﺭﺯﻣﻴﺎﺕ ﻣﻌﺎﳉﺔ ﺍﻟﺼﻮﺭ ﺍﻟﺮﻗﻤﻴﺔ ﺍﻟﱵ ﺗﻌﺘﻤﺪ ﻋﻠـﻰ ﺣـﺴﺎﺏ‬ ‫ﺍﺠﻤﻟﺎﻭﺭﺍﺕ ﻭﻫﻲ ﺃﻛﺜﺮ ﺍﻟﺸﺒﻜﺎﺕ ﺍﺳﺘﺨﺪﺍﻣﺎ ﰲ ﻫﺬﺍ ﺍﺠﻤﻟﺎﻝ ﻭﺭﻏﻢ ﻣﺮﻭﻧﺔ ﺍﻻﺗﺼﺎﻝ ﺑﲔ ﺍﺠﻤﻟـﺎﻭﺭﺍﺕ‬ ‫ﹰ‬ ‫ﻓﺈﻥ ﺍﻟﺘﺮﺍﺳﻞ ﺑﲔ ﺍﻟﻌﻘﺪ ﺍﳌﺘﺒﺎﻋﺪﺓ ﳛﺘﺎﺝ ﺇﱃ ﻋﻤﻠﻴﺎﺕ ﺗﺴﻴﲑ ﻭﻻﺑﺪ ﻣﻦ ﺇﺩﺍﺭﻬﺗﺎ ﺳﻮﺍﺀ ﺑﺎﺳـﺘﺨﺪﺍﻡ‬ ‫ً‬ ‫ﺍﳌﻜﺘﺒﺎﺕ ﺍﳋﺎﺻﺔ ﺃﻭ ﺑﱪﳎﺘﻬﺎ ﻣﻦ ﻗﺒﻞ ﺍﳌﺴﺘﺜﻤﺮ .‬ ‫3.1.2.2 اﻟﺸﺒﻜﺎت اﻟﺸﺠﺮﻳﺔ‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫84‬ ‫ﺗﺘﻨﻮﻉ ﺃﺷﻜﺎﻝ ﻫﺬﻩ ﺍﻟﺸﺒﻜﺎﺕ ﺗﺒﻌﺎ ﻟﻌﺪﺩ ﺍﻟﻌﻘﺪ ﺍﳌﺘﺼﻠﺔ ﺑﻜﻞ ﻋﻘﺪﺓ ﺃﻡ ﻭﲣﺘﻠﻒ ﺩﺭﺟﺘـﻬﺎ‬ ‫ﹰ‬ ‫ﻭﻗﻄﺮﻫﺎ ﻭﻓﻘﺎ ﻟﺬﻟﻚ ﺍﻧﻈﺮ ﺍﻟﺸﻜﻞ )81-2( . ﺗﺘﻼﺋﻢ ﻫﺬﻩ ﺍﻟﺘﺒﻮﻟﻮﺟﻴـﺎ ﻣـﻊ ﺍﻟﱪﳎـﺔ ﺍﳌﺘﻮﺍﺯﻳـﺔ‬ ‫ﹰ‬ ‫ﺍﻟﺪﻳﻨﺎﻣﻴﻜﻴﺔ ) ﻋﺪﺩ ﺍﳌﻬﻤﺎﺕ ﻏﲑ ﳏﺪﺩ ﻣﺴﺒﻘﺎ ( ﻭﻣﻊ ﺍﻟﱪﺍﻣﺞ ﻣﻦ ﳕﻂ ﺍﻟﺴﻴﺪ ﻭﺍﳋﺪﻡ ﺃﻭ ﺗﻠﻚ ﺍﻟﱵ‬ ‫ﹰ‬ ‫ﺗﺴﺘﺜﻤﺮ ﺍﻟﺘﻮﺍﺯﻱ ﻋﻠﻰ ﻋﺪﺓ ﻣﺴﺘﻮﻳﺎﺕ : ﻣﺴﺘﻮﻯ ﺍﻹﺟﺮﺍﺋﻴﺔ ﻭ ﻣﺴﺘﻮﻯ ﺍﻟﺘﻌﻠﻴﻤﺔ ﻣﺜﻼ ﻭ ﳜـﺼﺺ‬ ‫ﹰ‬ ‫ﻛﻞ ﻣﺴﺘﻮﻯ ﻣﻦ ﻣﺴﺘﻮﻳﺎﺕ ﺍﻟﺸﺠﺮﺓ ﳌﻌﺎﳉﺔ ﻣﺴﺘﻮﻯ ﻣﻦ ﺍﻟﺘﻮﺍﺯﻱ . ﺗﺴﺘﺨﺪﻡ ﻫﺬﻩ ﺍﻟﺘﺒﻮﻟﻮﺟﻴـﺎ‬ ‫ﺑﺎﻟﺸﻜﻞ ﺍﳌﻌﻘﺪ ﻛﻤﺎ ﰲ ﺣﺎﺳﻮﺏ 5-‪. 
CM‬‬ ‫اﻟﺸﻜﻞ )81-2( : اﻟﺸﺒﻜﺔ اﻟﺸﺠﺮﻳﺔ‬ ‫4.1.2.2 اﻟﺸﺒﻜﺎت اﻟﻤﻜﻌﺒﻴﺔ‬ ‫ﺗﺘﻤﻴﺰ ﻫﺬﻩ ﺍﻟﺸﺒﻜﺎﺕ ﺑﻄﺮﻳﻘﺔ ﺑﻨﺎﺋﻬﺎ ﺍﻟﺘﺼﺎﻋﺪﻱ ﻭ ﺑﺪﺭﺟﺘﻬﺎ ﺍﻟﱵ ﺗﺴﺎﻭﻱ ﻗﻄﺮﻫﺎ . ﺗﺘـﺄﻟﻒ‬ ‫ﺍﻟﺸﺒﻜﺔ ﺍﳌﻜﻌﺒﻴﺔ ﺫﺍﺕ ﺍﻟﺪﺭﺟﺔ ‪ n‬ﻣﻦ ‪ 2n‬ﻋﻘﺪﺓ ﻭ ﻳﻜﻮﻥ ﻟﻜﻞ ﻋﻘﺪﺓ ‪ n‬ﻋﻘﺪﺓ ﳎﺎﻭﺭﺓ ﻭ ﻗﻄـﺮ‬ ‫ﻫﺬﻩ ﺍﻟﺸﺒﻜﺔ ﻳﺴﺎﻭﻱ ﺩﺭﺟﺘﻬﺎ . ﻓﺎﻟﺸﺒﻜﺔ ﺍﳌﻜﻌﺒﻴﺔ ﻣﻦ ﺩﺭﺟﺔ ١ ﺗﺘﺄﻟﻒ ﻣﻦ ﻋﻘﺪﺗﲔ ﻣﺘـﺼﻠﺘﲔ‬ ‫ﻭﺍﻟﺸﺒﻜﺎﺕ ﺫﺍﺕ ﺍﻟﺪﺭﺟﺔ ٢ ﺗﺘﺄﻟﻒ ﻣﻦ ٤ ﻋﻘﺪ ﻋﻠﻰ ﺷﻜﻞ ﻣﺮﺑﻊ ، ﺍﻟﺸﻜﻞ )91-2( . ﳝﻜـﻦ‬ ‫ﺑﻨﺎﺀ ﺍﻟﺸﺒﻜﺔ ﺍﳌﻜﻌﺒﻴﺔ ﺫﺍﺕ ﺍﻟﺪﺭﺟﺔ )‪ (n‬ﺍﺑﺘﺪﺍﺀ ﻣﻦ ﺷﺒﻜﺘﲔ ﻣﻦ ﺍﻟﺪﺭﺟﺔ 1-‪ ، n‬ﺣﻴﺚ ﳝﻜـﻦ‬ ‫ً‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫94‬ ‫ﺑﻨﺎﺀ ﺍﻟﺸﺒﻜﺔ ﺍﳌﻜﻌﺒﻴﺔ ﺫﺍﺕ ﺍﻟﺪﺭﺟﺔ ٣ ﻣﻦ ﺷﺒﻜﺘﲔ ﺫﻭﺍﺕ ﺍﻟﺪﺭﺟﺔ ٢ . ﻣﻦ ﺃﻫﻢ ﳑﻴﺰﺍﺕ ﻫـﺬﻩ‬ ‫ﺍﻟﺘﺒﻮﻟﻮﺟﻴﺎ ﻫﻮ ﻗﻄﺮﻫﺎ ﺍﻟﺼﻐﲑ ﻧﺴﺒﺔ ﺇﱃ ﻋﺪﺩ ﻋﻘﺪﻫﺎ .‬ ‫اﻟﺸﻜﻞ)91-2( : ﺷﺒﻜﺔ ﻣﻜﻌﺒﻴﺔ ﻣﻦ اﻟﺪرﺟﺔ ٢‬ ‫2.2.2 اﻟﺸﺒﻜﺎت اﻟﺪﻳﻨﺎﻣﻴﻜﻴﺔ‬ ‫ﺗﺘﺄﻟﻒ ﻫﺬﻩ ﺍﻟﺸﺒﻜﺎﺕ ﻣﻦ ﻋﺪﺩ ﻣﻦ ﻋﻨﺎﺻﺮ ﺍﻻﺗﺼﺎﻝ ﺍﻟﱵ ﺗﺴﻤﺢ ﲟﺮﻭﺭ ﺍﳌﻌﻄﻴﺎﺕ ﺑﺎﲡﺎﻫﺎﺕ‬ ‫ﳐﺘﻠﻔﺔ . ﳛﺪﺩ ﻃﺮﻳﻖ ﻧﻘﻞ ﺍﳌﻌﻠﻮﻣﺎﺕ ﻣﻦ ﻣﻌﺎﰿ ﻣﺎ ﺇﱃ ﻣﻌﺎﰿ ﺁﺧﺮ ﻋﻨﺪ ﺗﻨﻔﻴﺬ ﻋﻤﻠﻴﺔ ﺍﻟﺘﺮﺍﺳـﻞ ﻭ‬ ‫ﻳﺆﺧﺬ ﲪﻞ ﺍﻟﺸﺒﻜﺔ ﺑﻌﲔ ﺍﻻﻋﺘﺒﺎﺭ ﻋﻨﺪ ﲢﺪﻳﺪ ﻫﺬﺍ ﺍﻟﻄﺮﻳﻖ. ﺗﺘﻜﻮﻥ ﻫﺬﻩ ﺍﻟـﺸﺒﻜﺎﺕ ﺑـﺸﻜﻞ‬ ‫ﺭﺋﻴﺴﻲ ﻣﻦ ﻣﺒﺪﻻﺕ )‪ (Switchs‬ﻭﻣﻮﺟﻬﺎﺕ )‪ . (Router‬ﺗﺘﻤﻴﺰ ﺍﳌﺒﺪﻟﺔ ﺑﻌﺪﺩ ﻣـﻦ ﺧﻄـﻮﻁ‬ ‫ﺍﻟﺪﺧﻞ ﻭ ﻋﺪﺩ ﳑﺎﺛﻞ ﻣﻦ ﺧﻄﻮﻁ ﺍﳋﺮﺝ ، ﻭ ﺗﺴﻤﺢ ﺑﻨﻘﻞ ﺍﳌﻌﻠﻮﻣﺎﺕ ﻣﻦ ﺃﻱ ﺧﻂ ﺩﺧﻞ ﺇﱃ ﺃﻱ‬ ‫ﺧﻂ ﺧﺮﺝ ﺷﺮﻁ ﺃﻥ ﻳﻜﻮﻥ ﺍﳋﻄﺎﻥ ﻏﲑ ﻣﺸﻐﻮﻟﲔ ﺑﻌﻤﻠﻴﺔ ﻧﻘﻞ ﻗﻴﺪ ﺍﻟﺘﻨﻔﻴﺬ .‬ ‫ﺃﺷﻬﺮ ﺍﻟﺸﺒﻜﺎﺕ ﺍﻟﺪﻳﻨﺎﻣﻴﻜﻴﺔ ﺍﳌﺴﺘﺨﺪﻣﺔ ﰲ ﺍﳊﺎﺳﺒﺎﺕ ﺍﳌﺘﻮﺍﺯﻳﺔ ﻫﻲ :‬ ‫1.2.2.2 ﺷﺒﻜﺔ اﻟﻨﺎﻗﻞ‬ ‫ﻳﻌﺘﱪ ﺍﻟﻨﺎﻗﻞ ﻋﻨﺼﺮ ﺍﻻﺗﺼﺎﻝ ﺍﳌﺸﺘﺮﻙ ﻭ ﺍﻟﻮﺣﻴﺪ ﺑﲔ ﺍﳌﻌﺎﳉﺎﺕ ﺃﻭ ﺑﲔ ﺍﳌﻌﺎﳉﺎﺕ ﻣﻦ ﺟﻬـﺔ‬ ‫ﻭﺍﻟﺬﺍﻛﺮﺓ ﻣﻦ ﺟﻬﺔ ﺃﺧﺮﻯ ﰲ ﻫﺬﻩ ﺍﻟﺸﺒﻜﺔ . ﻭ ﻟﻠﺘﻨﻈﻴﻢ ﺍﳌﺘﺸﺎﺭﻙ ﻻﺑﺪ ﻣﻦ ﻭﺟﻮﺩ ﻋـﺪﺩ ﻣـﻦ‬ ‫ﺍﳋﻮﺍﺭﺯﻣﻴﺎﺕ ﺍﻟﱵ ﲤﻜﻦ ﻣﻦ ﺗﻨﻔﻴﺬ ﻋﻤﻠﻴﺎﺕ ﺍﻟﺘﺮﺍﺳﻞ ﺍﳌﺨﻠﺘﻔﺔ ﺑﲔ ﺍﳌﻌﺎﳉﺎﺕ ﺍﳌﺘﻌﺪﺩﺓ . ﺗﻌﺘﻤـﺪ‬ ‫ﻫﺬﻩ ﺍﳋﻮﺍﺭﺯﻣﻴﺎﺕ ﻣﺒﺪﺃ ﺍﻷﻓﻀﻠﻴﺔ ﺣﻴﺚ ﲢﺪﺩ ﻟﻜﻞ ﻣﻌﺎﰿ ﺃﻓﻀﻠﻴﺔ ﺗﻌﺘﱪ ﻋﻨـﺪ ﲢﻘﻴـﻖ ﻋﻤﻠﻴـﺔ‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫05‬ ‫ﺍﻟﺘﺮﺍﺳﻞ ، ﻭ ﺗﻌﺘﻤﺪ ﻣﺒﺪﺃ ‪ (First In First Out)FIFO‬ﺣﻴﺚ ﻳﺘﻢ ﺗﻨﻔﻴﺬ ﻋﻤﻠﻴﺎﺕ ﺍﻟﺘﺮﺍﺳﻞ ﻭﻓﻖ‬ ‫ﺗﺮﺗﻴﺐ ﻃﻠﺒﻬﺎ ﻭﺗﻮﺟﺪ ﺧﻮﺍﺭﺯﻣﻴﺎﺕ ﺃﺧﺮﻯ ﺗﻌﺘﻤﺪ ﻣﺒﺪﺃ ﺍﻷﻓﻀﻠﻴﺔ ﺍﳌﺘﻐﲑﺓ .‬ ‫2.2.2.2 ﻣﺼﻔﻮﻓﺔ اﻟﻤﺒﺪﻻت‬ ‫ﺗﺘﺄﻟﻒ ﻫﺬﻩ ﺍﳌﺼﻔﻮﻓﺔ ﻣﻦ ﻋﺪﺩ ﻣﻦ ﺍﳌﺒﺪﻻﺕ ﻛﻞ ﻣﻨﻬﺎ ﺫﺍﺕ ﻣﺪﺧﻠﲔ ﻭ ﳐﺮﺟﲔ . 
ﻭ ﺗﺘﻤﻴﺰ ﻛﻞ‬ ‫ﻣﺒﺪﻟﺔ ﲝﺎﻟﺘﲔ ﻛﻤﺎ ﻫﻮ ﻣﺒﲔ ﰲ ﺍﻟﺸﻜﻞ )02-2( :‬ ‫اﻟﺸﻜــــﻞ )02-2(‬ ‫ﺍﳊﺎﻟﺔ )0( : ﻳﺘﻢ ﻓﻴﻬﺎ ﻧﻘﻞ ﺍﳌﻌﻠﻮﻣﺎﺕ ﻣﻦ ﺍﳌﺪﺧﻞ )0( ﺇﱃ ﺍﳌﺨﺮﺝ )0( ﻭﻣﻦ ﺍﳌﺪﺧﻞ )1(ﺇﱃ‬ ‫ﺍﳌﺨﺮﺝ )1 (.‬ ‫ﺍﳊﺎﻟﺔ )1( : ﻳﺘﻢ ﻓﻴﻬﺎ ﻧﻘﻞ ﺍﳌﻌﻠﻮﻣﺎﺕ ﻣﻦ ﺍﳌﺪﺧﻞ )1( ﺇﱃ ﺍﳌﺨﺮﺝ )0( ﻭﻣﻦ ﺍﳌﺪﺧﻞ )0( ﺇﱃ‬ ‫ﺍﳌﺨﺮﺝ )1( .‬ ‫ﻳﺘﻢ ﺍﺳﺘﺨﺪﺍﻡ ﻋﺪﺩ ﻣﻦ ﺍﳌﺒﺪﻻﺕ ﻣﻦ ﺃﺟﻞ ﻧﻘﻞ ﺍﳌﻌﻠﻮﻣﺎﺕ ﺑﲔ ﻣﻌﺎﳉﲔ ﻣﻦ ﺍﻟﺸﺒﻜﺔ ﻭﳛﺪﺩ ﻃﺮﻳﻖ‬ ‫ﻧﻘﻞ ﺍﳌﻌﻠﻮﻣﺎﺕ ﻋﻨﺪ ﺗﻨﻔﻴﺬ ﻋﻤﻠﻴﺔ ﺍﻟﺘﺮﺍﺳﻞ ﻭﻓﻘﺎ ﳉﺎﻫﺰﻳﺔ ﺍﳌﺒﺪﻻﺕ ﻭﳝﻜﻦ ﺍﺳﺘﺨﺪﺍﻡ ﻛﻞ ﻣﺒﺪﻟﺔ ﻣﻦ‬ ‫ﹰ‬ ‫ﺃﺟﻞ ﻃﺮﻳﻘﻲ ﺍﺗﺼﺎﻝ ﳐﺘﻠﻔﲔ ﺷﺮﻁ ﺃﻥ ﻻ ﻳﺸﺘﺮﻙ ﻫﺬﺍﻥ ﺍﻟﻄﺮﻳﻘﺎﻥ ﺑﺎﳌﺪﺍﺧﻞ ﻭﺍﳌﺨﺎﺭﺝ. ﻻ ﺗﺆﻣﻦ‬ ‫ﻫﺬﻩ ﺍﻟﺸﺒﻜﺔ ﺍﻟﺮﺑﻂ ﺑﲔ ﺃﻱ ﻣﻌﺎﳉﲔ ﰲ ﺃﻱ ﳊﻈﺔ . ﻓﻬﻲ ﺷﺒﻜﺔ ﳑﺎﻧﻌﺔ ﺗـﺆﺧﺮ ﺗﻨﻔﻴـﺬ ﺑﻌـﺾ‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫15‬ ‫ﺍﻻﺗﺼﺎﻻﺕ ﺣﱴ ﲢﺮﺭ ﺑﻌﺾ ﺍﳌﺒﺪﻻﺕ . ﻭﻣﻦ ﺍﳌﻤﻜـﻦ ﺗﺼﻤﻴﻢ ﻣﺼﻔﻮﻓﺔ ﻣﺒﺪﻻﺕ ﻏﲑ ﳑﺎﻧﻌـﺔ‬ ‫ﻭﻟﻜﻦ ﺣﺠﻢ ﻫﺬﻩ ﺍﻟﺸﺒﻜﺔ ﻳﻜﻮﻥ ﻛﺒﲑﹰﺍ ﺟﺪﹰﺍ ﻭﻛﺬﻟﻚ ﺗﻜﻠﻔﺘﻬﺎ . ﲤﻠﻚ ﺑﻌﺾ ﺍﳌﺼﻔﻮﻓﺎﺕ ﺧﺎﺻﻴﺔ‬ ‫ﻗﺎﺑﻠﻴﺔ ﺇﻋﺎﺩﺓ ﺍﻟﺘﺸﻜﻴﻞ ﻓﺘﻘﻮﻡ ﺍﻟﺸﺒﻜﺔ ﺑﺘﻐﻴﲑ ﺑﻌﺾ ﻃﺮﻕ ﺍﻻﺗﺼﺎﻝ ﺍﻟﱵ ﻫﻲ ﻗﻴﺪ ﺍﻟﺘﻨﻔﻴﺬ ﻣﻦ ﺃﺟـﻞ‬ ‫ﺇﻧﺸﺎﺀ ﻃﺮﻕ ﺟﺪﻳﺪﺓ ﻟﻼﺗﺼﺎﻝ . ﻭ ﻃﺒﻌﺎ ﲣﻔﻒ ﻫﺬﻩ ﺍﳋﺎﺻﻴﺔ ﻣﻦ ﺗﺄﺛﲑ ﺍﳌﻤﺎﻧﻌـﺔ. ﻳـﺆﺛﺮ ﻋـﺪﺩ‬ ‫ﹰ‬ ‫ﺍﳌﺒﺪﻻﺕ ﺍﻟﱵ ﲡﺘﺎﺯﻫﺎ ﺭﺳﺎﻟﺔ ﻣﺎ ﺗﺄﺛﲑﹰﺍ ﻣﺒﺎﺷﺮﹰﺍ ﻋﻠﻰ ﺯﻣﻦ ﺗﻨﻔﻴﺬ ﻋﻤﻠﻴﺔ ﺍﻟﺘﺮﺍﺳﻞ ﻭﻟﺬﺍ ﻧﺴﻌﻰ ﻋﺎﺩﺓ‬ ‫ﺇﱃ ﺍﺧﺘﻴﺎﺭ ﺃﻗﺼﺮ ﺍﻟﻄﺮﻕ .‬ ‫اﻟﺸﻜﻞ )12-2( : ﻣﺼﻔﻮﻓﺔ اﻟﻤﺒﺪﻻت‬ ‫3.2.2.2 اﻟﺸﺒﻜﺎت ﻣﺘﻌﺪدة اﻟﻄﺒﻘﺎت‬ ‫ﺗﺘﺄﻟﻒ ﻫﺬﻩ ﺍﻟﺸﺒﻜﺎﺕ ﻣﻦ ﻋﺪﺩ ﻣﻦ ﺍﳌﺒﺪﻻﺕ ﻭﺗﻘﻮﻡ ﺑﺸﻜﻞ ﺗﺼﺎﻋﺪﻱ ﺑﺈﺟﺮﺍﺀ ﻋﻤﻠﻴﺎﺕ ﺿـﺮﺏ‬ ‫ﻋﻠﻰ ﺍﻟﺸﺒﻜﺎﺕ.‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫25‬ ‫اﻟﺸﻜﻞ )22-2( : ﺷﺒﻜﺔ ﻣﺘﻌﺪد اﻟﻄﺒﻘﺎت‬ ‫ﻭ ﺗﺘﺄﻟﻒ ﻛﻞ ﺷﺒﻜﺔ ﻣﻦ ﻋﺪﺩ ﻣﻦ ﺍﻟﻄﺒﻘﺎﺕ ﻣﻊ ﻋﺪﺩ ﻣﺪﺍﺧﻞ ﺍﻟﺸﺒﻜﺔ ﻭﳐﺎﺭﺟﻬـﺎ . ﻳﻮﺟـﺪ‬ ‫ﺍﻟﻌﺪﻳﺪ ﻣﻦ ﺍﻟﻄﺮﻕ ﻟﺒﻨﺎﺀ ﻫﺬﻩ ﺍﻟﺸﺒﻜﺎﺕ ﻭﲣﺘﻠﻒ ﺍﻟﺸﺒﻜﺎﺕ ﺍﻟﻨﺎﲡﺔ ﲞﻮﺍﺹ ﻋﻤﻠﻴﺔ ﺍﻻﺗﺼﺎﻝ .‬ ‫ﺃﻫﻢ ﻧﺘﺎﺋﺞ ﺍﻟﺪﺭﺍﺳﺎﺕ ﺍﻟﺮﻳﺎﺿﻴﺔ ﺣﻮﻝ ﻫﺬﺍ ﺍﻟﻨﻮﻉ ﻣﻦ ﺍﻟﺸﺒﻜﺎﺕ ﺗﻠﻚ ﺍﻟﱵ ﺗﺘﻌﻠـﻖ ﺑﺎﳌﻤﺎﻧﻌـﺔ .‬ ‫ﻓﻴﱪﻫﻦ ﺭﻳﺎﺿﻴﺎ ﺃﻧﻪ ﻟﺒﻨﺎﺀ ﺷﺒﻜﺔ ﻏﲑ ﳑﺎﻧﻌﺔ ﳓﺘﺎﺝ ﺇﱃ )‪ n.log2 (n‬ﻣﺒﺪﻟﺔ ﻭﺫﻟﻚ ﺑﺎﻋﺘﺒـﺎﺭ ‪n‬‬ ‫ﹰ‬ ‫ﻋﺪﺩ ﺍﳌﺪﺍﺧﻞ ﻭﻋﺪﺩ ﺍﳌﺨﺎﺭﺝ ) ﺷﺒﻜﺔ ﻟﺘﺄﻣﲔ ﺍﻻﺗﺼﺎﻝ ﺑﲔ ‪ n‬ﻣﻌﺎﰿ ( . ﻓﻨﺤﺘﺎﺝ ﺇﱃ ﺷﺒﻜﺔ ﻣﻦ‬ ‫٤٢ ﻣﺒﺪﻟﺔ ﻣﻦ ﺃﺟﻞ ﺷﺒﻜﺔ ﺗﺼﻞ ﺑﲔ ٨ ﻣﻌﺎﳉﺎﺕ ، ﺍﻧﻈﺮ ﺍﻟﺸﻜﻞ )22-2(. 
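والحساب الوارد أعلاه لعدد المبدلات اللازم للشبكة غير الممانعة يمكن التحقق منه بمقتطف توضيحي بسيط:

```python
# التحقق من النتيجة الرياضية المذكورة: نحتاج n * log2(n) مبدلة
# لبناء شبكة غير ممانعة تصل بين n معالج.
import math

def switches_needed(n):
    return int(n * math.log2(n))

print(switches_needed(8))  # 24 مبدلة من أجل 8 معالجات
```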
ﻳﻌﺘﻤﺪ ﺍﳌـﺼﻤﻤﻮﻥ‬ ‫ﻋﻠﻰ ﺷﺒﻜﺎﺕ ﺃﺻﻐﺮ ﺣﺠﻤﺎ ﻭﺫﺍﺕ ﺧﻮﺍﺹ ﻣﻘﺒﻮﻟـﺔ ﺑﺒﻨـﺎﺀ ﺷـﺒﻜﺎﺕ ﺫﺍﺕ ﻋـﺪﺩ ﻃﺒﻘـﺎﺕ‬ ‫ﹰ‬ ‫ﻳﺴﺎﻭﻱ)‪ log2 (n‬ﻭﲝﻴﺚ ﻳﻜﻮﻥ ﻋﺪﺩ ﺍﳌﺪﺍﺧﻞ ﻭﻋﺪﺩ ﺍﳌﺨﺎﺭﺝ ﻣﺴﺎﻭﻳﺎ ﻟﻌـﺪﺩ ﺍﳌﻌﺎﳉـﺎﺕ .‬ ‫ﹰ‬ ‫ﺗﺴﺘﺨﺪﻡ ﺃﻳﻀﺎ ﺑﻌﺾ ﺍﳋﻮﺍﺭﺯﻣﻴﺎﺕ ﻹﻋﺎﺩﺓ ﺗﺸﻜﻴﻞ ﻃﺮﻕ ﺍﻻﺗﺼﺎﻝ ﳑﺎ ﳜﻔﻒ ﻣﻦ ﺍﳌﻤﺎﻧﻌﺔ .‬ ‫ﹰ‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫35‬ ‫ﺍﻟﻔﺼﻞ ﺍﻟﺜﺎﻟﺚ:ﻣﺒﺎﺩﺉ ﺗﺼﻤﻴﻢ ﺍﳋﻮﺍﺭﺯﻣﻴﺎﺕ ﺍﳌﺘﻮﺍﺯﻳﺔ‬ ‫ﻳﻌﺘﱪ ﺗﻄﻮﻳﺮ ﺍﳋﻮﺭﺯﻣﻴﺎﺕ ﻋﻨﺼﺮﹰﺍ ﻫﺎﻣﺎ ﰲ ﺣﻞ ﺍﳌﺴﺎﺋﻞ ﺑﺎﺳﺘﺨﺪﺍﻡ ﺍﳊﺎﺳـﺒﺎﺕ ﺍﻵﻟﻴـﺔ.‬ ‫ﹰ‬ ‫ﻭﻫﻨﺎﻙ ﻧﻮﻋﺎﻥ ﻣﻦ ﺍﳋﻮﺍﺭﺯﻣﻴﺎﺕ؛ ﺧﻮﺍﺭﺯﻣﻴﺎﺕ ﺗﺴﻠﺴﻠﻴﺔ، ﻭﺧﻮﺍﺭﺯﻣﻴـﺎﺕ ﻣﺘﻮﺍﺯﻳـﺔ، ﻭﳝﻜـﻦ‬ ‫ِ‬ ‫ﺍﻟﺘﻌﺮﻳﻒ ﺑﺸﻜﻞ ﻣﺒﺴﻂ ﻟﻠﺨﻮﺍﺭﺯﻣﻴﺎﺕ ﺍﻟﺘﺴﻠﺴﻠﻴﺔ ﺑﺄﻬﻧﺎ ﻭﺻﻒ )ﺃﻭ ﺗﺴﻠﺴﻞ ﻣـﻦ ﺍﳋﻄـﻮﺍﺕ‬ ‫ﺍﻷﻭﻟﻴﺔ( ﳊﻞ ﻣﺴﺄﻟﺔ ﻣﻌﻄﺎﺓ ﻋﻠﻰ ﺣـﺎﺳﺐ ﺗﺴـﻠﺴﻠﻲ، ﺃﻣﺎ ﺍﳋﻮﺍﺭﺯﻣﻴﺎﺕ ﺍﳌﺘﻮﺍﺯﻳﺔ‬ ‫‪(Parallel‬‬ ‫)‪ Algorithms‬ﻓﻬﻲ ﺗﺼﻒ ﻛﻴﻔﻴﺔ ﺍﳊﻞ ﳌﺴﺄﻟﺔ ﻣﻌﻄﺎﺓ ﺑﺎﺳﺘﺨﺪﺍﻡ ﻋﺪﺓ ﻣﻌﺎﳉـﺎﺕ. ﺇﻥ ﺗـﺼﻤﻴﻢ‬ ‫ﺍﳋﻮﺍﺭﺯﻣﻴﺎﺕ ﺍﳌﺘﻮﺍﺯﻳﺔ ﻳﺘﻄﻠﺐ ﺃﻛﺜﺮ ﻣﻦ ﳎﺮﺩ ﲢﺪﻳﺪ ﺧﻄﻮﺍﺕ ﺍﳊـﻞ، ﻓﻌﻠـﻰ ﺃﻗـﻞ ﺗﻘـﺪﻳﺮ،‬ ‫ﻟﻠﺨﻮﺍﺭﺯﻣﻴﺎﺕ ﺍﳌﺘﻮﺍﺯﻳﺔ ﺑﻌﺪ ﺇﺿﺎﰲ ﻟﻠﺘﺰﺍﻣﻦ، ﻭﻋﻠﻰ ﻣﺼﻤﻢ ﺍﳋﻮﺍﺭﺯﻣّﻴﺔ ﺃﻥ ﳛﺪﺩ ﳎﻤﻮﻋـﺔ ﻣـﻦ‬ ‫ﺍﳋﻄﻮﺍﺕ ﳝﻜﻦ ﺃﻥ ﻳﺘﻢ ﺗﻨﻔﻴﺬﻫﺎ ﺳﻮﻳﺎ ﺑﻨﻔﺲ ﺍﻟﻮﻗﺖ. ﻭﻫﺬﺍ ﺍﻟﺘﺤﺪﻳﺪ ﺿﺮﻭﺭﻱ ﻟﻠﺤﺼﻮﻝ ﻋﻠﻰ ﺃﻱ‬ ‫ﹰ‬ ‫ﺯﻳﺎﺩﺓ ﰲ ﺍﻷﺩﺍﺀ ﻣﻦ ﺍﺳﺘﻌﻤﺎﻝ ﺍﳊﺎﺳﺐ ﺍﳌﺘﻮﺍﺯﻱ. ﳝﻜﻦ ﲢﺪﻳﺪ ﺍﳌﻴـﺰﺍﺕ ﺍﻟـﱵ ﺗﺘـﺼﻒ ﻬﺑـﺎ‬ ‫ﺍﳋﻮﺍﺭﺯﻣﻴﺎﺕ ﺍﳌﺘﻮﺍﺯﻳﺔ ﺃﻭ ﰲ ﺑﻌﺾ ﻣﻨﻬﺎ ﻋﻠﻰ ﺍﻟﻨﺤﻮ ﺍﻵﰐ:‬ ‫•‬ ‫ﲢﺪﻳﺪ ﺃﺟﺰﺍﺀ ﺍﻟﻌﻤﻞ ﺍﻟﺬﻱ ﳝﻜﻦ ﺃﻥ ﻳﺆﺩﻯ ﺑﺸﻜﻞ ﻣﺘﺰﺍﻣﻦ.‬ ‫•‬ ‫ﺗﻮﺿﻴﻊ )ﺃﻱ ﺇﺳﻨﺎﺩ( ﺍﻷﺟﺰﺍﺀ ﺍﳌﺘﺰﺍﻣﻨﺔ ﻣـﻦ ﺍﻟﻌﻤـﻞ ﰲ ﻋـﺪﺓ ﺇﺟﺮﺍﺋﻴـﺎﺕ)ﺃﻭ‬ ‫ﻣﻌﺎﳉﺎﺕ( ﺗﻌﻤﻞ ﺑﺎﻟﺘﻮﺍﺯﻱ.‬ ‫•‬ ‫ﺗﻮﺯﻳﻊ ﺍﳌﻌﻄﻴﺎﺕ ﺍﳌﺪﺧﻠﺔ ﻭﺍﳌﺨﺮﺟﺔ ﻭﺍﳌﻌﻄﻴﺎﺕ ﺍﻟﻮﺳﻴﻄﺔ ﺍﳌﺮﺗﺒﻄﺔ ﺑﺎﻟﱪﻧﺎﻣﺞ.‬ ‫•‬ ‫ﺇﺩﺍﺭﺓ ﻋﻤﻠﻴﺔ ﺍﻟﻮﺻﻮﻝ ﺇﱃ ﺍﳌﻌﻄﻴﺎﺕ ﺍﳌﺸﺘﺮﻛﺔ ﺑﲔ ﻋﺪﺓ ﺇﺟﺮﺍﺋﻴﺎﺕ.‬ ‫•‬ ‫ﻣﺰﺍﻣﻨﺔ ﺍﳌﻌﺎﳉﺎﺕ ﰲ ﺍﳌﺮﺍﺣﻞ ﺍﳌﺨﺘﻠﻔﺔ ﻟﺘﺸﻐﻴﻞ ﺍﻟﱪﻧﺎﻣﺞ ﺍﳌﺘﻮﺍﺯﻱ.‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫45‬ ‫1.3 ﻣﻔﺎهﻴﻢ أﺳﺎﺳﻴﺔ‬ ‫• ‪‬א‪‬א‪‬א‪‬א‪ W‬ﻫﻨﺎﻙ ﺧﻄﻮﺗﺎﻥ ﺭﺋﻴﺴﻴﺘﺎﻥ ﻣﺴﺘﺨﺪﻣﺘﺎﻥ ﰲ ﺗـﺼﻤﻴﻢ‬ ‫‪‬‬ ‫ﺍﳋﻮﺍﺭﺯﻣﻴﺎﺕ ﺍﳌﺘﻮﺍﺯﻳﺔ: ﺍﻷﻭﱃ- ﺗﻘﺴﻴﻢ ﺍﻟﻌﻤﻠﻴﺔ ﺍﳊﺴﺎﺑﻴﺔ ﺇﱃ ﻋﺪﺓ ﻋﻤﻠﻴﺎﺕ ﺣـﺴﺎﺑﻴﺔ ﺻـﻐﲑﺓ،‬ ‫ﻭﺍﻟﺜﺎﻧﻴﺔ- ﻭﺿﻊ ﻫﺬﻩ ﺍﻟﻌﻤﻠﻴﺎﺕ ﺍﳌﻘﺴﻤﺔ ﰲ ﻋﺪﺓ ﻣﻌﺎﳉﺎﺕ ﻟﻜﻲ ﺗﻨﻔﺬ ﺑﺸﻜﻞ ﻣﺘﻮﺍﺯ. 
ﻭﺳـﻮﻑ‬ ‫ٍ‬ ‫ﻧﻌﺮﺽ ﻻﺣﻘﺎ ﻫﺎﺗﲔ ﺍﳋﻄﻮﺗﲔ ﺍﳌﺴﺘﺨﺪﻣﺘﲔ ﰲ ﺗﺼﻤﻴﻢ ﺍﳋﻮﺍﺭﺯﻣﻴﺎﺕ ﺍﳌﺘﻮﺍﺯﻳـﺔ ﻣـﻦ ﺧـﻼﻝ‬ ‫ﹰ‬ ‫ﺍﳌﺜﺎﻟﲔ: ﺿﺮﺏ ﻣﺼﻔﻮﻓﺔ ﺑﺸﻌﺎﻉ، ﻭﻋﻤﻠﻴﺔ ﺍﺳﺘﻌﻼﻡ ﰲ ﻗﻮﺍﻋﺪ ﺍﻟﺒﻴﺎﻧﺎﺕ، ﺇﺿﺎﻓﺔ ﺇﱃ ﺍﻟﺘﻌﺮﻳﻒ ﺑﻌﺪﺩ‬ ‫ﻣﻦ ﺍﳌﻔﺎﻫﻴ ﺍﳌﺴﺘﺨﺪﻣﺔ.‬ ‫• א‪ :(Decomposition)‬ﻫﻮ ﻋﻤﻠﻴﺔ ﺗﻔﻜﻴﻚ ﺍﻟﻌﻤﻠﻴﺔ ﺍﳊﺴﺎﺑﻴﺔ ﺇﱃ ﺃﺟﺰﺍﺀ ﺃﺻﻐﺮ.‬ ‫• א‪ :(Tasks)‬ﻫﻲ ﻭﺣﺪﺍﺕ ﻣﻦ ﺍﻟﻌﻤﻠﻴﺔ ﺍﳊﺴﺎﺑﻴﺔ ﻣﻌﺮﻓﺔ ﺑﻮﺍﺳﻄﺔ ﺍﳌﱪﻣﺞ ﻭﺍﻟﱵ ﲤﺜﻞ ﺃﺟﺰﺍﺀ‬ ‫‪‬‬ ‫ﺍﻟﻌﻤﻠﻴﺔ ﺍﳊﺴﺎﺑﻴﺔ ﺍﻟﺮﺋﻴﺴﻴﺔ ﺍﻟﱵ ﰎ ﺍﳊﺼﻮﻝ ﻋﻠﻴﻬـﺎ ﺑﻮﺍﺳـﻄﺔ ﺍﻟﺘﻘـﺴﻴﻢ. ﻭﺍﻟﺘﻨﻔﻴـﺬ ﺍﳌﺘـﺰﺍﻣﻦ‬ ‫)‪)(Concurrency‬ﺃﻱ ﺑﻨﻔﺲ ﺍﻟﻮﻗﺖ( ﻟﻠﻤﻬﺎﻡ ﺍﳌﺘﻌﺪﺩﺓ ﻫﻮ ﺍﳌﻔﺘﺎﺡ ﺍﻷﺳﺎﺳﻲ ﻹﻗﻼﻝ ﺍﻟﺰﻣﻦ ﺍﻟﻼﺯﻡ‬ ‫ﳊﻞ ﺍﳌﺴﺄﻟﺔ ﺑﻜﺎﻣﻠﻬﺎ. ﻭﻗﺪ ﺗﻜﻮﻥ ﺍﳌﻬﺎﻡ -ﻟﻠﻤﺴﺄﻟﺔ ﺍﳌﻘﺴﻤﺔ- ﻟﻴﺴﺖ ﻛﻠﻬﺎ ﻣﻦ ﻧﻔﺲ ﺍﳊﺠﻢ.‬ ‫•‬ ‫‪‬א‪Graph)‬‬ ‫‪‬‬ ‫‪ :(Depenency‬ﻫﻲ ﺍﻟﺮﺳﻢ ﺍﻟﺘﺠﺮﻳﺪﻱ ﺍﻟـﺬﻱ ﻳـﺴﺘﺨﺪﻡ‬ ‫ﻟﻠﺘﻌﺒﲑ ﻋﻦ ﺍﻻﻋﺘﻤﺎﺩﻳﺔ ﺃﻭ ﺍﻟﺘﺒﻌﻴﺔ ﻓﻴﻤﺎ ﺑﲔ ﺍﳌﻬﺎﻡ ﻭ ﺍﻟﺘﺮﺗﻴﺐ/ﺍﻟﻨﻈﺎﻡ ﺍﻟﻨﺴﱯ ﻟﻠﺘﻨﻔﻴﺬ.‬ ‫ﻣﺜﺎل)1-3(: ﺿﺮب ﻣﺼﻔﻮﻓﺔ ﺏﺸﻌﺎع‬ ‫ﺑﺎﻋﺘﺒﺎﺭ ﺃﻧﻨﺎ ﻧﺮﻳﺪ ﺇﺟﺮﺍﺀ ﻋﻤﻠﻴﺔ ﺍﻟﻀﺮﺏ ﻋﻠﻰ ﺍﳌﺼﻔﻮﻓﺔ ‪ A‬ﲝﺠـﻢ ‪ n×n‬ﻣـﻊ ﺍﻟـﺸﻌﺎﻉ ‪،b‬‬ ‫ﻓﺴﻴﻨﺘﺞ ﻟﺪﻳﻨﺎ ﺷﻌﺎﻉ ﺁﺧﺮ ‪ .y‬ﺇﻥ ﺣﺎﺻﻞ ﻋﻤﻠﻴﺔ ﺍﻟﻀﺮﺏ ]‪ y[i‬ﻳﻜﻮﻥ ﻧﺎﲡﺎ ﻋﻦ ﺿﺮﺏ ﺍﻟـﺴﻄﺮ ‪i‬‬ ‫ﹰ‬ ‫ﻭﻛﻤﺎ ﻫﻮ ﻣﻮﺿـﺢ ﰲ‬ ‫ﻣﻦ ‪ A‬ﻣﻊ ﻛﺎﻣﻞ ﺍﻟﺸﻌﺎﻉ ‪ .b‬ﻭﻟﻺﻳﻀﺎﺡ ﻓﺈﻥ:‬ ‫ﺍﻟﺸﻜﻞ)1-3( ﻓﺈﻥ ﻋﻤﻠﻴﺔ ﺣﺴﺎﺏ ﻛﻞ ﻗﻴﻤﺔ ]‪ y[i‬ﳝﻜﻦ ﺃﻥ ﺗﻌﺘﱪ ﻛﻤﻬﻤﺔ.‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫55‬ ‫ﻛﻤﺎ ﳝﻜﻦ ﺗﻘﺴﻴﻢ ﻫﺬﻩ ﺍﳌﺴﺄﻟﺔ ﺇﱃ ﻋﺪﺓ ﻣﻬﺎﻡ ﻛﻤﺎ ﰲ ﺍﻟﺸﻜﻞ )4-3( ﺣﻴﺚ ﻗﺴﻤﺖ ﺍﳌﺴﺄﻟﺔ‬ ‫ﺇﱃ ٤ ﻣﻬﺎﻡ، ﲝﻴﺚ ﺃﻥ ﻛﻞ ﻣﻬﻤﺔ ﺗﻘﻮﻡ ﲝﺴﺎﺏ 4/‪ n‬ﻣﻦ ﺍﻟﻨﺎﺗﺞ.‬ ‫اﻟﺸﻜﻞ)1-3(: ﻣﺴﺄﻟﺔ ﺿﺮب ﻣﺼﻔﻮﻓﺔ ﺏﺸﻌﺎع ﻣﻘﺴﻤﺔ إﻟﻰ ‪ n‬ﻣﻬﻤﺔ، ﺡﻴﺚ ‪ n‬هﻲ ﻋﺪد أﺳﻄﺮ اﻟﻤﺼﻔﻮﻓﺔ. اﻟﺠﺰء‬ ‫اﻟﺬي ﺕﺘﻌﺎﻣﻞ ﻣﻌﻪ)ﻣﺪﺥﻼت وﻣﺨﺮﺟﺎت( ﻟﻠﻤﻬﻤﺔ١ ﻣﻮﺿﺢ ﺏﺎﻟﻠﻮن اﻟﻐﺎﻣﻖ.‬ ‫ﻧﻼﺣﻆ ﺑﺄﻥ ﲨﻴﻊ ﺍﳌﻬﺎﻡ ﺍﶈﺪﺩﺓ ﰲ ﺍﻟﺸﻜﻞ)1-3( ﻫﻲ ﻣﻬﺎﻡ ﻣﺴﺘﻘﻠﺔ ﻭﳝﻜﻦ ﺗﻨﻔﻴﺬﻫﺎ ﺳﻮﻳﺎ ﺃﻭ‬ ‫ﹰ‬ ‫ﻋﻠﻰ ﺃﻱ ﺗﺴﻠﺴﻞ. ﻭﺑﺸﻜﻞ ﻋﺎﻡ، ﰲ ﺑﻌﺾ ﺍﳌﺴﺎﺋﻞ ﻗﺪ ﺗﻜﻮﻥ ﺑﻌﺾ ﺍﳌﻬﺎﻡ ﻓﻴﻬـﺎ ﲝﺎﺟـﺔ ﺇﱃ‬ ‫ﺑﻴﺎﻧﺎﺕ ﻧﺎﲡﺔ ﻋﻦ ﻣﻬﺎﻡ ﺃﺧﺮﻯ ﻭﻟﺬﺍ ﻓﺈﻥ ﻋﻠﻴﻬﺎ ﺍﻻﻧﺘﻈﺎﺭ ﺇﱃ ﺃﻥ ُﺗﻨﻬﻲ ﻫﺬﻩ ﺍﳌﻬﺎﻡ ﺃﻋﻤﺎﳍﺎ.‬ ‫ﻭﰲ ﳐﻄﻂ ﺍﻟﺘﺒﻌﻴﺔ ﺗﻌﺘﱪ ﺍﻟﻌﻘﺪ ﻛﻤﻬﺎﻡ، ﺃﻣﺎ ﺍﳋﻄﻮﻁ ﺍﻟﱵ ﺗﺼﻞ ﺑﲔ ﺍﻟﻌﻘﺪ)ﻳﻄﻠـﻖ ﻋﻠﻴﻬـﺎ‬ ‫ﺃﺿﻼﻉ( ﻓﺘﺪﻝ ﻋﻠﻰ ﺍﻻﻋﺘﻤﺎﺩﻳﺔ ﺑﲔ ﺍﳌﻬﺎﻡ. 
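ويمكن تمثيل تقسيم مسألة ضرب مصفوفة بشعاع إلى n مهمة مستقلة، كما في المثال (1-3)، بالمقتطف الافتراضي الآتي بلغة Python، حيث تحسب المهمة رقم i القيمة y[i] وحدها بضرب السطر i من A بكامل الشعاع b:

```python
# مثال افتراضي: كل مهمة مستقلة تحسب عنصراً واحداً من الشعاع الناتج y،
# ولذا يمكن تنفيذ المهام بالتوازي أو بأي تسلسل.

def row_task(A, b, i):
    # المهمة i: جداء السطر i من المصفوفة A مع كامل الشعاع b
    return sum(A[i][j] * b[j] for j in range(len(b)))

def matvec(A, b):
    # تجميع نتائج المهام الـ n المستقلة
    return [row_task(A, b, i) for i in range(len(A))]

A = [[1, 2],
     [3, 4]]
b = [1, 1]
print(matvec(A, b))  # [3, 7]
```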
ﻓﺎﳌﻬﻤﺔ ﺍﻟﱵ ﺗﺘﻄﺎﺑﻖ ﻣﻊ ﺃﺣﺪ ﺍﻟﻌﻘﺪ ﻻ ﳝﻜﻦ ﺗﻨﻔﻴﺬﻫﺎ‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫65‬ ‫ﺇﻻ ﺣﲔ ﺍﻧﺘﻬﺎﺀ ﺗﻨﻔﻴﺬ ﲨﻴﻊ ﺍﳌﻬﺎﻡ ﺍﻟﱵ ﺗﺪﺧﻞ ﺇﻟﻴﻬﺎ)ﺃﻱ ﺃﻥ ﺍﳋﻂ ﺍﻟﻮﺍﺻﻞ ﻳﻜﻮﻥ ﺩﺍﺧﻞ ﻟﻠﻤﻬﻤـﺔ‬ ‫ﻭﻟﻴﺲ ﺧﺎﺭﺟﺎ ﻣﻨﻬﺎ(.‬ ‫ﻣﺜﺎل)2-3( إﺟﺮاﺋﻴﺔ اﻻﺳﺘﻌﻼم ﻣﻦ ﻗﻮاﻋﺪ اﻟﺒﻴﺎﻥﺎت‬ ‫ﻳﻮﺟﺪ ﰲ ﺍﳉﺪﻭﻝ )1-3( ﻋﺮﺽ ﻟﻘﺎﻋﺪﺓ ﺑﻴﺎﻧﺎﺕ ﻋﻼﺋﻘﻴﺔ ﺧﺎﺻﺔ ﺑﺴﻴﺎﺭﺍﺕ، ﻭﻛﻞ ﺻﻒ ﰲ‬ ‫ﻫﺬﺍ ﺍﳉﺪﻭﻝ ﻫﻮ ﺳﺠﻞ ﳛﺘﻮﻱ ﻋﻠﻰ ﺑﻴﺎﻧﺎﺕ ﻋﻦ ﺳﻴﺎﺭﺓ ﳏﺪﺩﺓ، ﻣﺜﻞ ﺍﳌﻌﺮﻑ ‪ ،ID‬ﻭﺳﻨﺔ ﺍﻹﻧﺘﺎﺝ‬ ‫َﱢ‬ ‫‪ ،year‬ﻭﺍﻟﻠﻮﻥ ‪ ،color‬ﺍﱁ..‬ ‫اﻟﺠﺪول )1-3(: ﻗﺎﻋﺪة ﺏﻴﺎﻥﺎت ﻟﺘﺨﺰﻳﻦ ﻣﻌﻠﻮﻣﺎت ﻋﻦ اﻟﺴﻴﺎرات.‬ ‫ﻟﻨﻔﺘﺮﺽ ﺃﻧﻨﺎ ﻧﺮﻳﺪ ﺇﺟﺮﺍﺀ ﺍﻻﺳﺘﻌﻼﻡ ﺍﻟﺘﺎﱄ:‬ ‫‪MODEL="Civic" AND YEAR="2001" AND (COLOR="Green" OR‬‬ ‫)"‪COLOR="White‬‬ ‫ﻳﻘﻮﻡ ﻫﺬﺍ ﺍﻻﺳﺘﻌﻼﻡ ﺑﺎﻟﺒﺤﺚ ﻋﻦ ﲨﻴﻊ ﺍﻟﺴﻴﺎﺭﺍﺕ ﺍﻟﱵ ﻣﻦ ﺍﻟﻨﻮﻉ ‪ Civic‬ﻭﺍﻟﱵ ﺃﻧﺘﺠـﺖ ﰲ‬ ‫ﺍﻟﺴﻨﺔ ١٠٠٢ ﻭﳍﺎ ﺃﺣﺪ ﺍﻟﻠﻮﻧﲔ: ﺍﻷﺧﻀﺮ ﺃﻭ ﺍﻷﺑﻴﺾ.‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫75‬ ‫ﰲ ﻗﻮﺍﻋﺪ ﺍﻟﺒﻴﺎﻧﺎﺕ ﺍﻟﻌﻼﺋﻘﻴﺔ )‪ ،(Relational Database‬ﻳﺘﻢ ﻫﺬﺍ ﺍﻻﺳﺘﻌﻼﻡ ﺑﻮﺍﺳﻄﺔ‬ ‫ﺇﻧﺸﺎﺀ ﻋﺪﺩ ﻣﻦ ﺍﳉﺪﺍﻭﻝ ﺍﻟﻮﺳﻴﻄﺔ، ﻭﺇﺣﺪﻯ ﺍﻟﻄﺮﻕ ﺍﳌﻤﻜﻨﺔ ﳍﺬﻩ ﺍﳌﺴﺄﻟﺔ ﻫﻲ ﺇﻧﺸﺎﺀ ﺍﳉﺪﺍﻭﻝ‬ ‫ﺍﻷﺭﺑﻌﺔ ﺍﻟﺘﺎﻟﻴﺔ:‬ ‫⇐ ﺟﺪﻭﻝ ﳛﺘﻮﻱ ﻋﻠﻰ ﲨﻴﻊ ﺍﻟﺴﻴﺎﺭﺍﺕ ﻣﻦ ﻧﻮﻉ ‪.Civic‬‬ ‫⇐ ﺟﺪﻭﻝ ﳛﺘﻮﻱ ﻋﻠﻰ ﲨﻴﻊ ﺍﻟﺴﻴﺎﺭﺍﺕ ﺍﻟﱵ ﺃﻧﺘﺠﺖ ﰲ ﻋﺎﻡ ١٠٠٢.‬ ‫⇐ ﺟﺪﻭﻝ ﳛﺘﻮﻱ ﻋﻠﻰ ﲨﻴﻊ ﺍﻟﺴﻴﺎﺭﺍﺕ ﺫﺍﺕ ﺍﻟﻠﻮﻥ ﺍﻷﺧﻀﺮ.‬ ‫⇐ ﺟﺪﻭﻝ ﳛﺘﻮﻱ ﻋﻠﻰ ﲨﻴﻊ ﺍﻟﺴﻴﺎﺭﺍﺕ ﺫﺍﺕ ﺍﻟﻠﻮﻥ ﺍﻷﺑﻴﺾ.‬ ‫ﰒ ﺑﻌﺪ ﺫﻟﻚ ﺗﺘﻢ ﺍﻟﻌﻤﻠﻴﺔ ﺑﻮﺍﺳﻄﺔ ﺩﻣﺞ ﻫﺬﻩ ﺍﳉﺪﺍﻭﻝ ﻋﻦ ﻃﺮﻳﻖ ﺣﺴﺎﺏ ﺍﻟﺘﻘﺎﻃﻌـﺎﺕ ﺃﻭ‬ ‫ﺍﻻﲢﺎﺩﺍﺕ ﺑﲔ ﺍﳉﺪﺍﻭﻝ ﺯﻭﺟﺎ ﺯﻭﺟﺎ. ﻭﻋﻠﻰ ﻭﺟﻪ ﺍﻟﺘﺤﺪﻳﺪ ﺳﻴﺘﻢ ﺣﺴﺎﺏ ﺍﻟﺘﻘﺎﻃﻊ ﻟﻠﺠـﺪﻭﻟﲔ‬ ‫ﹰﹰ‬ ‫"ﺍﻟﺴﻴﺎﺭﺍﺕ ﻣﻦ ﻧﻮﻉ ‪ "Civic‬ﻭ "ﺍﻟﺴﻴﺎﺭﺍﺕ ﺍﻟﱵ ﺃﻧﺘﺠﺖ ﻋﺎﻡ ١٠٠٢" ﻭﺫﻟﻚ ﻹﻧﺸﺎﺀ ﺟـﺪﻭﻝ‬ ‫ﳛﺘﻮﻱ ﻋﻠﻰ ﺳﻴﺎﺭﺍﺕ ‪ Civic‬ﺍﻟﱵ ﺃﻧﺘﺠﺖ ﻋﺎﻡ ١٠٠٢.‬ ‫ﻭﺑﻨﻔﺲ ﺍﻷﺳﻠﻮﺏ، ﺳﻴﺘﻢ ﺇﺟﺮﺍﺀ ﺍﲢﺎﺩ ﳉﺪﻭﱄ "ﺍﻟﻠﻮﻥ ﺍﻷﺧﻀﺮ" ﻭ "ﺍﻟﻠﻮﻥ ﺍﻷﺑﻴﺾ" ﻭﺫﻟـﻚ‬ ‫ﻟﻜﻲ ﻳﺘﻢ ﺇﻧﺸﺎﺀ ﺟﺪﻭﻝ ﳉﻤﻴﻊ ﺍﻟﺴﻴﺎﺭﺍﺕ ﺫﺍﺕ ﺍﻟﻠﻮﻥ ﺍﻷﺧﻀﺮ ﺃﻭ ﺍﻷﺑﻴﺾ. 
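ويمكن تمثيل عمليات الدمج الثنائية (التقاطع والاتحاد) المستخدمة في هذا الاستعلام ببيانات افتراضية مبسطة، حيث يمثل كل جدول وسيط مجموعة من معرفات السيارات:

```python
# بيانات افتراضية (ليست من الكتاب): كل مجموعة تمثل جدولاً وسيطاً
# من معرفات السيارات المطابقة لشرط واحد من شروط الاستعلام.
civic = {1, 2, 3, 5}      # سيارات من نوع Civic
y2001 = {2, 3, 4}         # سيارات أنتجت عام 2001
green = {3, 6}            # سيارات خضراء
white = {2, 7}            # سيارات بيضاء

civic_2001 = civic & y2001        # تقاطع جدولين (مهمة مستقلة)
green_or_white = green | white    # اتحاد جدولين (مهمة مستقلة أخرى)
result = civic_2001 & green_or_white  # الدمج النهائي = نتيجة الاستعلام

print(sorted(result))  # [2, 3]
```

لاحظ أن حساب `civic_2001` وحساب `green_or_white` مهمتان مستقلتان يمكن تنفيذهما بالتوازي قبل خطوة الدمج النهائية.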
ﻭﰲ ﺍﻟﻨﻬﺎﻳﺔ ﺳـﻴﺘﻢ‬ ‫ﺇﺟﺮﺍﺀ ﺍﻟﺘﻘﺎﻃﻊ ﻟﻠﺠﺪﻭﻝ ﺍﻟﺬﻱ ﳛﺘﻮﻱ ﻋﻠﻰ ﺳﻴﺎﺭﺍﺕ 1002 ‪ Civic‬ﻣﻊ ﺍﳉﺪﻭﻝ ﺍﻟﺬﻱ ﳛﺘﻮﻱ ﻋﻠﻰ‬ ‫ﲨﻴﻊ ﺍﻟﺴﻴﺎﺭﺍﺕ ﺍﳋﻀﺮﺍﺀ ﺃﻭ ﺍﻟﺒﻴﻀﺎﺀ ﺍﻟﻠﻮﻥ، ﻭﺑﺬﻟﻚ ﻳﺘﻢ ﺍﳊﺼﻮﻝ ﻋﻠﻰ ﻧﺘﻴﺠﺔ ﺍﻻﺳﺘﻌﻼﻡ.‬ ‫ﳝﻜﻦ ﻟﻠﺤﺴﺎﺑﺎﺕ ﺍﳌﺨﺘﻠﻔﺔ ﺍﻟﱵ ﺍﺳﺘﺨﺪﻣﺖ ﳌﻌﺎﳉﺔ ﺍﻻﺳﺘﻌﻼﻡ ﰲ ﺍﳌﺜﺎﻝ ﺍﻟـﺴﺎﺑﻖ ﺃﻥ ﲤﺜـﻞ‬ ‫ﺑﻮﺍﺳﻄﺔ ﳐﻄﻂ ﺍﻟﺘﺒﻌﻴﺔ ﺍﳌﻮﺿﺢ ﰲ ﺍﻟﺸﻜﻞ)2-3(.‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫85‬ ‫اﻟﺸﻜﻞ)2-3(: اﻟﺠﺪاول اﻟﻤﺨﺘﻠﻔﺔ واﻟﻌﻼﻗﺔ ﺏﻴﻨﻬﺎ ﻓﻲ ﻋﻤﻠﻴﺔ اﻻﺳﺘﻌﻼم.‬ ‫ﻛﻞ ﻋﻘﺪﺓ ﰲ ﺍﻟﺸﻜﻞ ﲤﺜﻞ ﻣﻬﻤﺔ ﺗﺘﻄﺎﺑﻖ ﻣﻊ ﺟﺪﻭﻝ ﻭﺳﻴﻂ ﲝﺎﺟﺔ ﺇﱃ ﺃﻥ ُﺤـﺴﺐ، ﺃﻣـﺎ‬ ‫ﻳ‬ ‫ﺍﻷﺳﻬﻢ ﺍﻟﱵ ﺑﲔ ﺍﻟﻌﻘﺪ ﻓﺘﻮﺿﺢ ﺍﻟﻌﻼﻗﺔ )ﺃﻭ ﺍﻟﺘﺒﻌﻴﺔ( ﺑﲔ ﺍﳌﻬﺎﻡ. ﻓﻌﻠﻰ ﺳـﺒﻴﻞ ﺍﳌﺜـﺎﻝ، ﻗﺒـﻞ ﺃﻥ‬ ‫ﻧﺘﻤﻜﻦ ﻣﻦ ﺣﺴﺎﺏ ﺍﳉﺪﻭﻝ ﺍﻟﺬﻱ ﳛﺘﻮﻱ ﻋﻠﻰ 1002 ‪ Civic‬ﳚﺐ ﺃﻭﻻ ﺃﻥ ﻧﻘـﻮﻡ ﲝـﺴﺎﺏ‬ ‫ﺍﳉﺪﻭﻟﲔ "ﺳﻴﺎﺭﺍﺕ ‪ "Civic‬ﻭ "ﺳﻴﺎﺭﺍﺕ ١٠٠٢".‬ ‫ﻳﻮﺟﺪ ﻋﺪﺓ ﻃﺮﻕ ﻟﻠﺤﺼﻮﻝ ﻋﻠﻰ ﺑﻌﺾ ﺍﳊﺴﺎﺑﺎﺕ، ﻭﺧـﺼﻮﺻﺎ ﺗﻠـﻚ ﺍﻟـﱵ ﺗـﺴﺘﺨﺪﻡ‬ ‫ﺍﳌﻌﺎﻣﻼﺕ، ﻣﺜﻞ: ﺍﳉﻤﻊ ، ﺍﻟﻀﺮﺏ ﺃﻭ ‪ AND‬ﻭ ‪ OR‬ﺍﳌﻨﻄﻘﻴﺘﺎﻥ. ﻓـﺎﻟﻄﺮﻕ ﺍﳌﺨﺘﻠﻔـﺔ ﻟﺘﺮﺗﻴـﺐ‬ ‫ﺍﳊﺴﺎﺑﺎﺕ ﺗﺆﺩﻱ ﺇﱃ ﳐﻄﻄﺎﺕ ﺗﺒﻌﻴﺔ ﳐﺘﻠﻔﺔ ﻭﺫﺍﺕ ﻣﺰﺍﻳﺎ ﳐﺘﻠﻔﺔ ﺃﻳﻀﺎ. ﻭﻟﻠﺪﻻﻟﺔ ﻋﻠـﻰ ﺫﻟـﻚ،‬ ‫ﹰ‬ ‫ﻓﺎﺳﺘﻌﻼﻡ ﻗﺎﻋﺪﺓ ﺍﻟﺒﻴﺎﻧﺎﺕ ﺍﻟﻮﺍﺭﺩ ﰲ ﺍﳌﺜﺎﻝ)2-3( ﳝﻜﻦ ﺃﻥ ﻳﺘﻢ ﺣﻠﻪ ﺑﺎﻷﺳﻠﻮﺏ ﺍﻵﰐ:‬ ‫ﺃﻭﻻ: ﲢﺪﻳﺪ ﺟﺪﻭﻝ ﳛﺘﻮﻱ ﻋﻠﻰ ﺍﻟﺴﻴﺎﺭﺍﺕ ﺫﺍﺕ ﺍﻟﻠﻮﻥ ﺍﻷﺧﻀﺮ ﺃﻭ ﺍﻷﺑﻴﺾ.‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫95‬ ‫ﺛﺎﻧﻴﺎ: ﺇﺟﺮﺍﺀ ﺗﻘﺎﻃﻊ ﳉﺪﻭﻝ "ﺍﻟﺴﻴﺎﺭﺍﺕ ﺫﺍﺕ ﺍﻟﻠﻮﻥ ﺍﻷﺧﻀﺮ ﺃﻭ ﺍﻷﺑﻴﺾ" ﻣـﻊ ﺍﳉـﺪﻭﻝ:‬ ‫"ﺳﻴﺎﺭﺍﺕ ﺃﻧﺘﺠﺖ ﰲ ﻋﺎﻡ ١٠٠٢"‬ ‫ﺛﺎﻟﺜﺎ: ﺗﺪﻣﺞ ﺍﻟﻨﺘﺎﺋﺞ ﻣﻊ ﺟﺪﻭﻝ "ﺳﻴﺎﺭﺍﺕ ‪."Civic‬‬ ‫ﻳﻮﺿﺢ ﺍﻟﺸﻜﻞ)3-3( ﻫﺬﻩ ﺍﳋﻄﻮﺍﺕ ﻣﻦ ﺧﻼﻝ ﳐﻄﻂ ﺍﻟﺘﺒﻌﻴﺔ:‬ ‫اﻟﺸﻜﻞ)3-3(: ﻣﺨﻄﻂ اﻟﺘﺒﻌﻴﺔ ﻟﻌﻤﻠﻴﺔ اﻻﺳﺘﻌﻼم.‬ ‫<<‬ ‫<<‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫06‬ ‫•‪‬א‪ WEGranularityF‬ﻳﻄﻠﻖ ﻋﻠﻰ ﺣﺠﻢ ﻭﻋﺪﺩ ﺍﳌﻬﺎﻡ ﰲ ﺍﳌﺴﺄﻟﺔ ﺍﺠﻤﻟـﺰﺃﺓ ﺍﻟﺘﻘـﺴﻴﻢ‬ ‫‪‬‬ ‫ﺍﳊﺒﻮﰊ. ﻭﺍﻟﺘﻘﺴﻴﻢ ﺇﱃ ﻋﺪﺩ ﻛﺒﲑ ﻣﻦ ﺍﳌﻬﺎﻡ ﺍﻟﺼﻐﲑﺓ ﻳﻄﻠﻖ ﻋﻠﻴﻪ ﺍﳊﺒﻮﺑﻴﺔ ﺍﻟﻨﺎﻋﻤﺔ. ﺃﻣﺎ ﺍﻟﺘﻘـﺴﻴﻢ‬ ‫ﺇﱃ ﻋﺪﺩ ﺻﻐﲑ ﻣﻦ ﺍﳌﻬﺎﻡ ﺍﻟﻜﺒﲑﺓ ﻓﻴﻄﻠﻖ ﻋﻠﻴﻪ ﺍﳊﺒﻮﺑﻴﺔ ﺍﳋﺸﻨﺔ.‬ ‫ﺍﳌﺜـﺎﻝ)1-3(‬ ‫ﻋﻠﻰ ﺳﺒﻴﻞ ﺍﳌﺜﺎﻝ، ﺍﻟﺘﻘﺴﻴﻢ ﳌﺴﺄﻟﺔ ﻋﻤﻠﻴﺔ ﺿﺮﺏ ﻣﺼﻔﻮﻓﺔ ﺑﺸﻌﺎﻉ ﺍﻟﻮﺍﺭﺩﺓ ﰲ‬ ‫ﺗﻌﺘﱪ ﺗﻘﺴﻴﻤﺎ ﺣﺒﻮﺑﻴﺎ ﻧﺎﻋﻤﺎ ﻭﺫﻟﻚ ﻷﻥ ﻛﻞ ﻣﻬﻤﺔ ﻣﻦ ﺍﳌﻬﺎﻡ ﺍﻟﻜﺜﲑﺓ ﺗﻘﻮﻡ ﺑﺘﻨﻔﻴﺬ ﻋﻤﻠﻴﺔ ﺍﻟﻀﺮﺏ‬ ‫ﹰﹰ‬ ‫ﹰ‬ ‫ﻟﺴﻄﺮ ﻭﺍﺣﺪ. ﺃﻣﺎ ﰲ ﺍﻟﺸﻜﻞ)4-3( ﻓﻔﻴﻪ ﻋﺮﺽ ﻟﺘﻘﺴﻴﻢ ﻣﻦ ﻧﻮﻉ ﺍﳊﺒﻮﺑﻴﺔ ﺍﳋﺸﻨﺔ ﻟﻨﻔﺲ ﺍﳌﺴﺄﻟﺔ‬ ‫ﺇﱃ ٤ ﻣﻬﺎﻡ، ﲝﻴﺚ ﺗﻘﻮﻡ ﻛﻞ ﻣﻬﻤﺔ ﺑﺘﻨﻔﻴﺬ 4/‪ n‬ﻣﻦ ﺍﻟﻌﻤﻞ ﻟﻜﺎﻣﻞ ﺍﻟﺸﻌﺎﻉ ﺍﻟﻨﺎﺗﺞ.‬ ‫اﻟﺸﻜﻞ)4-3(: ﻣﺴﺄﻟﺔ ﺿﺮب ﻣﺼﻔﻮﻓﺔ ﺏﺸﻌﺎع ﻣﻘﺴﻤﺔ إﻟﻰ أرﺏﻌﺔ ﻣﻬﺎم. 
اﻟﺠﺰء اﻟﺬي ﺕﺘﻌﺎﻣﻞ ﻣﻌﻪ)ﻣﺪﺥﻼت‬ ‫وﻣﺨﺮﺟﺎت( ﻟﻠﻤﻬﻤﺔ١ ﻣﻮﺿﺢ ﺏﺎﻟﻠﻮن اﻟﻐﺎﻣﻖ.‬ ‫‪‬‬ ‫ﻳﺘﻌﻠﻖ ﻣﻔﻬﻮﻡ ﺍﻟﺘﻘﺴﻴﻢ ﺍﳊﺒﻮﰊ ﺑﺪﺭﺟﺔ ﺍﻟﺘﺰﺍﻣﻦ. ﻓﺎﳊﺪ ﺍﻷﻗﺼﻰ ﻟﻌﺪﺩ ﺍﳌﻬﺎﻡ ﺍﻟـﱵ ﳝﻜـﻦ‬ ‫ﺗﻨﻔﻴﺬﻫﺎ ﺑﺸﻜﻞ ﺁﱐ ﰲ ﺑﺮﻧﺎﻣﺞ ﻣﺘﻮﺍﺯﻱ ﰲ ﺃﻱ ﻭﻗﺖ ﻣﻌﻄﻰ ﻳﻄﻠﻖ ﻋﻠﻴﻪ "ﺍﻟﺪﺭﺟـﺔ ﺍﻟﻌﻈﻤـﻰ‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫16‬ ‫ﻟﻠﺘﺰﺍﻣﻦ". ﻭﰲ ﺃﻏﻠﺐ ﺍﳊﺎﻻﺕ، ﺗﻜﻮﻥ ﺍﻟﺪﺭﺟﺔ ﺍﻟﻘﺼﻮﻯ ﻟﻠﺘﺰﺍﻣﻦ ﺃﻗﻞ ﻣﻦ ﻋﺪﺩ ﺍﳌﻬـﺎﻡ ﺍﻟﻜﻠـﻲ‬ ‫ﻭﺫﻟﻚ ﺑﺴﺒﺐ ﺍﻟﻌﻼﻗﺔ ﻓﻴﻤﺎ ﺑﲔ ﺍﳌﻬﺎﻡ. ﻓﻌﻠﻰ ﺳﺒﻴﻞ ﺍﳌﺜﺎﻝ، ﺍﻟﺪﺭﺟﺔ ﺍﻟﻘﺼﻮﻯ ﻟﻠﺘﺰﺍﻣﻦ ﰲ ﳐﻄـﻂ‬ ‫ﺍﻟﺘﺒﻌﻴﺔ ﺍﳌﻮﺿﺢ ﰲ ﺍﻟﺸﻜﻞ )2-3( ﻭ ﺷﻜﻞ )3-3( ﻫﻮ 4. ﻭﰲ ﳐﻄﻄﺎﺕ ﺍﻟﺘﺒﻌﻴﺔ ﻫﺬﻩ، ﲢـﺼﻞ‬ ‫ﺍﻟﺪﺭﺟﺔ ﺍﻟﻘﺼﻮﻯ ﻟﻠﺘﺰﺍﻣﻦ ﰲ ﺍﻟﺒﺪﺍﻳﺔ ﻋﻨﺪ ﻣﺎ ﻳﺘﻢ ﺣﺴﺎﺏ ﺍﳉﺪﺍﻭﻝ ﺍﻷﺭﺑﻌﺔ)ﺍﻟﻨﻮﻉ،ﺍﻟﺴﻨﺔ،ﺍﻟﻠـﻮﻥ‬ ‫ﺍﻷﺑﻴﺾ،ﻭﺍﻟﻠﻮﻥ ﺍﻷﺧﻀﺮ( ﺑﻨﻔﺲ ﺍﻟﻮﻗﺖ.‬ ‫ﻭﺑﺸﻜﻞ ﻋﺎﻡ، ﺩﺭﺟﺔ ﺍﻟﺘﺰﺍﻣﻦ ﺍﻟﻌﻈﻤﻰ ﳌﺨﻄﻄﺎﺕ ﺍﻟﺘﺒﻌﻴﺔ ﺍﻟﺸﺠﺮﻳﺔ ﻳﺴﺎﻭﻱ ﺩﺍﺋﻤـﺎ ﻟﻌـﺪﺩ‬ ‫ﺍﻟﺘﻔﺮﻋﺎﺕ ﰲ ﺍﻟﺸﺠﺮﺓ.‬ ‫ﻫﻨﺎﻙ ﻣﺆﺷﺮ ﻫﺎﻡ ﻟﻠﺪﻻﻟﺔ ﻋﻠﻰ ﺃﺩﺍﺀ ﺍﻟﱪﺍﻣﺞ ﺍﳌﺘﻮﺍﺯﻳﺔ، ﻫﺬﺍ ﺍﳌﺆﺷﺮ ﻫﻮ "ﻣﺘﻮﺳـﻂ ﺩﺭﺟـﺔ‬ ‫ﺍﻟﺘﺰﺍﻣﻦ"، ﻭﺍﻟﺬﻱ ﳝﻜﻦ ﺣﺴﺎﺑﻪ ﺑﺄﺧﺬ ﺍﳌﺘﻮﺳﻂ ﻟﻌﺪﺩ ﺍﳌﻬﺎﻡ ﺍﻟﱵ ﳝﻜﻦ ﺗﻨﻔﻴﺬﻫﺎ ﺗﺰﺍﻣﻨﻴﺎ ﺧـﻼﻝ‬ ‫ﻛﺎﻣﻞ ﻣﺪﺓ ﺗﺸﻐﻴﻞ ﺍﻟﱪﻧﺎﻣﺞ. ﻭﳝﻜﻦ ﺃﻥ ﻳﺰﺩﺍﺩ ﻛﻞ ﻣﻦ ﺍﳌﻌﺪﻝ ﻭﺍﳊﺪ ﺍﻷﻗﺼﻰ ﻟﺪﺭﺟﺔ ﺍﻟﺘـﻮﺍﺯﻱ‬ ‫ﻛﻠﻤﺎ ﻛﺎﻥ ﺍﻟﺘﻘﺴﻴﻢ ﺍﳊﺒﻮﰊ ﻟﻠﻤﻬﺎﻡ ﺃﺻﻐﺮ )ﺃﻧﻌﻢ(. ﻓﻌﻠﻰ ﺳﺒﻴﻞ ﺍﳌﺜﺎﻝ ﺍﻟﺘﻘﺴﻴﻢ ﳌـﺴﺄﻟﺔ ﺿـﺮﺏ‬ ‫ﻣﺼﻔﻮﻓﺔ ﺑﺸﻌﺎﻉ ﺍﻟﻮﺍﺭﺩ ﰲ ﺍﻟﺸﻜﻞ)1-3( ﻟﻪ ﺗﻘﺴﻴﻢ ﺣﺒﻮﰊ ﺻﻐﲑ ﻭﺩﺭﺟﺔ ﺗﺰﺍﻣﻦ ﻋﺎﻟﻴـﺔ. ﺃﻣـﺎ‬ ‫ﺍﻟﺘﻘﺴﻴﻢ ﻟﻨﻔﺲ ﺍﳌﺴﺄﻟﺔ ﰲ ﺍﻟﺸﻜﻞ)4-3( ﻟﻪ ﺗﻘﺴﻴﻢ ﺣﺒﻮﰊ ﻛﺒﲑ ﻭﺩﺭﺟﺔ ﺗﺰﺍﻣﻦ ﻣﻨﺨﻔﻀﺔ.‬ ‫ﺗﻌﺘﻤﺪ ﺩﺭﺟﺔ ﺍﻟﺘﺰﺍﻣﻦ ﺃﻳﻀﺎ ﻋﻠﻰ ﺷﻜﻞ ﳐﻄﻂ ﺍﻟﺘﺒﻌﻴﺔ ﻭﺍﻟﺘﻘﺴﻴﻢ ﺍﳊﺒﻮﰊ ﺫﺍﺗﻪ. ﻭﺑﺸﻜﻞ ﻋﺎﻡ،‬ ‫ﻟﻴﺲ ﻫﻨﺎﻙ ﺿﻤﺎﻥ ﻟﺘﻤﺎﺛﻠﻬﻤﺎ ﰲ ﺩﺭﺟﺔ ﺍﻟﺘﺰﺍﻣﻦ. ﻓﻌﻠﻰ ﺳﺒﻴﻞ ﺍﳌﺜﺎﻝ ﻳﻌﺘﱪ ﺍﻟﺸﻜﻞ )5-3( ﲡﺮﻳـﺪﺍ‬ ‫ﳌﺨﻄﻄﻲ ﺍﻟﺘﺒﻌﻴﺔ ﰲ ﺍﻟﺸﻜﻠﲔ ))3-3( ﻭ )2-3(( ﻋﻠﻰ ﺍﻟﺘﻮﺍﱄ، ﻭﺍﻟﻌﺪﺩ ﺍﳌﻜﺘﻮﺏ ﺑـﺪﺍﺧﻞ ﻛـﻞ‬ ‫ﻋﻘﺪﺓ ﳝﺜﻞ ﻛﻤﻴﺔ ﺍﻟﻌﻤﻞ ﺍﳌﻄﻠﻮﺏ ﻹﻛﻤﺎﻝ ﺍﳌﻬﻤﺔ ﺍﻘﺎﺑﻠﺔ ﳍﺬﻩ ﺍﻟﻌﻘﺪﺓ.‬ ‫ﺇﻥ ﻣﻌﺪﻝ ﺩﺭﺟﺔ ﺍﻟﺘﺰﺍﻣﻦ ﳌﺨﻄﻂ ﺍﻟﺘﺒﻌﻴﺔ ﺍﳌﻮﺿـﺢ ﰲ ﺍﻟـﺸﻜﻞ )‪ (3-5.a‬ﻫـﻮ 33.2، ﻭﰲ‬ ‫ﺍﻟﺸﻜﻞ )‪(3-5.b‬ﻫﻮ 88.1، ﻣﻊ ﺃﻥ ﻛﻼ ﺍﳌﺨﻄﻄﺎﻥ ﻳﻌﺘﻤﺪﺍﻥ ﻧﻔﺲ ﺍﻟﺘﻘﺴﻴﻢ.‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫26‬ ‫اﻟﺸﻜﻞ)5-3(: ﺕﺠﺮﻳﺪ ﻟﻤﺨﻄﻄﻲ اﻟﺘﺒﻌﻴﺔ ﻟﻠﺸﻜﻠﻴﻦ )3-3( و)2-3(‬ ‫ﻣﻦ ﻣﺰﺍﻳﺎ ﳐﻄﻂ ﺍﻟﺘﺒﻌﻴﺔ ﺃﻧﻪ ﳛﺪﺩ ﻣﻌﺪﻝ ﺩﺭﺟﺔ ﺍﻟﺘﺰﺍﻣﻦ ﻷﻱ ﺗﻘﺴﻴﻢ ﺣﺒﻮﰊ ﻣﻌﻄـﻰ ﻋـﻦ‬ ‫ﻃﺮﻳﻖ ﺍﳌﺴﺎﺭ ﺍﳊﺮﺝ، ﺳﻨﺸﲑ ﻟﻠﻌﻘﺪ ﺍﻟﱵ ﻟﻴﺲ ﳍﺎ ﺃﺿﻼﻉ ﺩﺍﺧﻠﺔ ﺇﻟﻴﻬﺎ ﺑـﻌﻘﺪ ﺍﻟﺒﺪﺍﻳﺔ، ﺃﻣﺎ ﺍﻟﻌﻘﺪ‬ ‫ﺍﻟﱵ ﻻ ﳜﺮﺝ ﻣﻨﻬﺎ ﺃﺿﻼﻉ ﻓﺴﻨﺸﲑ ﳍﺎ ﺑـﻌﻘﺪ ﺍﻟﻨﻬﺎﻳﺔ. 
ﻭﻋﻠﻰ ﻫﺬﺍ ﻓﺎﳌﺴﺎﺭ ﺍﳊﺮﺝ ﻫﻮ ﺃﻃـﻮﻝ‬ ‫ﺧﻂ ﻳﺼﻞ ﺑﲔ ﺃﻱ ﺯﻭﺟﲔ ﻣﻦ ﻋﻘﺪ ﺍﻟﻨﻬﺎﻳﺔ ﻭﺍﻟﺒﺪﺍﻳﺔ. ﻭﺃﻣﺎ ﺍﺠﻤﻟﻤﻮﻉ ﻟﻜﻤﻴﺔ ﺍﻟﻌﻤﻞ ﻟﻠﻌﻘﺪ ﺍﻟﻮﺍﻗﻌﺔ‬ ‫ﻋﻠﻰ ﺍﳌﺴﺎﺭ ﺍﳊﺮﺝ ﻳﻌﺮﻑ ﺑﻄﻮﻝ ﺍﳌﺴﺎﺭ ﺍﳊﺮﺝ، ﲝﻴﺚ ﺃﻥ ﻛﻤﻴﺔ ﺍﻟﻌﻘﺪﺓ ﻫﻲ ﻛﻤﻴﺔ ﺍﻟﻌﻤﻞ ﻟﻠﻤﻬﻤﺔ‬ ‫ﺍﳌﻄﺎﺑﻘﺔ ﳍﺬﻩ ﺍﻟﻌﻘﺪﺓ. ﺃﻣﺎ ﻧﺴﺒﺔ ﺇﲨﺎﱄ ﻛﻤﻴﺔ ﺍﻟﻌﻤﻞ ﻟﻠﻤﺴﺎﺭ ﺍﳊﺮﺝ ﻓُﻌـﺮﻑ ﲟﻌـﺪﻝ ﺩﺭﺟـﺔ‬ ‫ﺘَ‬ ‫ﺍﻟﺘﺰﺍﻣﻦ. ﻭﻟﺬﻟﻚ ﻓﺎﳌﺴﺎﺭ ﺍﳊﺮﺝ ﺍﻷﻗﺼﺮ ﻳﺆﺩﻱ ﺇﱃ ﺩﺭﺟﺔ ﺗﺰﺍﻣﻦ ﺃﻋﻠﻰ. ﻓﻌﻠﻰ ﺳﺒﻴﻞ ﺍﳌﺜﺎﻝ، ﻃﻮﻝ‬ ‫ﺍﳌﺴﺎﺭ ﺍﳊﺮﺝ ﳌﺨﻄﻂ ﺍﻟﺘﺒﻌﻴﺔ ﺍﳌﻮﺿﺢ ﰲ ﺍﻟﺸﻜﻞ )‪ (3-5.a‬ﻫـﻮ 72، ﺃﻣـﺎ ﻟﻠـﺸﻜﻞ‬ ‫ﻓﺎﻟﻄﻮﻝ ﻫﻮ 43، ﻭﻧﻈﺮﺍ ﻷﻥ ﳎﻤﻮﻉ ﻛﻤﻴﺔ ﺍﻟﻌﻤﻞ ﺍﻟﻼﺯﻣﺔ ﳊﻞ ﺍﳌﺴﺄﻟﺔ ﺑﺎﺳﺘﺨﺪﺍﻡ ﺃﺳﻠﻮﰊ ﺍﻟﺘﻘﺴﻴﻢ‬ ‫ﻫﻮ 36 ﻭ 46 ﻋﻠﻰ ﺍﻟﺘﻮﺍﱄ ﻓﺈﻥ ﻣﻌﺪﻝ ﺩﺭﺟﺔ ﺍﻟﺘﺰﺍﻣﻦ ﳌﺨﻄﻄﻲ ﺍﻟﺘﺒﻌﻴﺔ ﻫـﻮ 33.2 ﻭ 88.1 ﻋﻠـﻰ‬ ‫ﺍﻟﺘﻮﺍﱄ.‬ ‫)‪(3-5.b‬‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫36‬ ‫ﻋﻠﻰ ﺍﻟﺮﻏﻢ ﻣﻦ ﺃﻧﻪ ﻗﺪ ﻳﻈﻬﺮ ﻟﻠﺒﻌﺾ ﺑﺄﻧﻪ ﳝﻜﻦ ﺇﻧﻘﺎﺹ ﺍﻟﺰﻣﻦ ﺍﻟﻼﺯﻡ ﳊﻞ ﺍﳌﺴﺄﻟﺔ ﺑﺎﺳﺘﺨﺪﺍﻡ‬ ‫ﺗﻘﺴﻴﻢ ﺫﻭ ﺣﺒﻮﺑﻴﺔ ﻋﺎﻟﻴﺔ، ﺇﻻ ﺃﻥ ﻫﺬﻩ ﺍﳊﺎﻟﺔ ﻻ ﺗﺼﺎﺩﻑ ﻛﺜﲑﹰﺍ ﰲ ﺍﻷﺣﻮﺍﻝ ﺍﻟﻌﻤﻠﻴﺔ. ﻓﻔﻲ ﺍﻟﻌﺎﺩﺓ‬ ‫ﻳﻜﻮﻥ ﻫﻨﺎﻙ ﺣﺪ ﺃﻋﻠﻰ ﻟﻜﻤﻴﺔ ﺍﻟﺘﻘﺴﻴﻢ ﺍﳊﺒﻮﰊ ﺍﻟﻨﺎﻋﻢ ﺍﻟﱵ ﺗﺴﻤﺢ ﻬﺑﺎ ﺍﳌﺴﺄﻟﺔ، ﻓﻤﺜﻼ ﻫﻨـﺎﻙ 2‪N‬‬ ‫ﻋﻤﻠﻴﺔ ﺿﺮﺏ ﻭﻣﺜﻠﻬﺎ ﻟﻠﺠﻤﻊ ﰲ ﻣﺴﺄﻟﺔ ﺿﺮﺏ ﻣﺼﻔﻮﻓﺔ ﺑﺸﻌﺎﻉ ﺍﻟﱵ ﻭﺭﺩﺕ ﰲ ﺍﳌﺜـﺎﻝ )1.3(‬ ‫ﻓﻬﺬﻩ ﺍﳌﺴﺄﻟﺔ ﻻ ﳝﻜﻦ ﺗﻘﺴﻴﻤﻬﺎ ﻷﻛﺜﺮ ﻣﻦ )2‪ O(N‬ﻣﻬﻤﺔ ﺣﱴ ﻭﻟﻮ ﺍﺳﺘﺨﺪﺍﻣﻨﺎ ﺃﻛﺜـﺮ ﺃﻧـﻮﺍﻉ‬ ‫ﺍﻟﺘﻘﺴﻴﻢ ﻧﻌﻮﻣﺔ ﰲ ﺍﻟﺘﺤﺒﻴﺐ.‬ ‫• ‪‬א‪ WETask-InteractionF‬ﻳﻌﺘﱪ ﻋﺎﻣﻞ ﻋﻤﻠﻲ ﺁﺧﺮ ﻫﺎﻡ ﻳﻘﻠﻞ ﻣﻦ ﻗﺪﺭﺗﻨﺎ‬ ‫‪‬‬ ‫‪‬‬ ‫ﻋﻠﻰ ﲢﻘﻴﻖ ﺍﻟﺘﺴﺮﻳﻊ ﻏﲑ ﺍﶈﺪﻭﺩ)ﻧﺴﺒﺔ ﺯﻣﻦ ﺍﻟﺘﻨﻔﻴﺬ ﺍﻟﺘﺴﻠﺴﻠﻲ ﺇﱃ ﺍﳌﺘﻮﺍﺯﻱ( ﻣﻦ ﺟﺮﺍﺀ ﺍﺳﺘﻌﻤﺎﻝ‬ ‫ﺍﻟﺘﻮﺍﺯﻱ. ﻫﺬﺍ ﺍﻟﻌﺎﻣﻞ ﻫﻮ ﺍﻟﺘﻔﺎﻋﻞ ﺑﲔ ﺍﳌﻬﺎﻡ ﺍﻟﱵ ﺗﻌﻤﻞ ﻋﻠﻰ ﻣﻌﺎﳉﺎﺕ ﳐﺘﻠﻔﺔ. ﻓﺎﳌﻬﺎﻡ ﺍﳌﻘـﺴﻤﺔ‬ ‫ﰲ ﺍﳌﺴﺄﻟﺔ ﺗﺘﺸﺎﺭﻙ ﻓﻴﻤﺎ ﺑﻴﻨﻬﺎ ﻣﺪﺧﻼﺕ ﻭﳐﺮﺟﺎﺕ ﻭﺑﻴﺎﻧﺎﺕ ﻭﺳﻴﻄﺔ. ﻭﺍﻟﺘﺒﻌﻴﺔ ﰲ ﳐﻄﻂ ﺍﻟﺘﺒﻌﻴـﺔ‬ ‫ﺗﻨﺘﺞ ﻏﺎﻟﺒﺎ ﻣﻦ ﺣﻘﻴﻘﺔ ﳐﺮﺟﺎﺕ ﺇﺣﺪﻯ ﺍﳌﻬﺎﻡ ﻟﺘﻜﻮﻥ ﻣﺪﺧﻼﺕ ﳌﻬﺎﻡ ﺃﺧﺮﻯ. ﻓﻌﻠﻰ ﺳﺒﻴﻞ ﺍﳌﺜﺎﻝ‬ ‫ﹰ‬ ‫ﰲ ﻣﺜﺎﻝ ﺍﺳﺘﻌﻼﻡ ﻗﺎﻋﺪﺓ ﺍﻟﺒﻴﺎﻧﺎﺕ، ﺗﺘﺸﺎﺭﻙ ﺍﳌﻬﺎﻡ ﻓﻴﻤﺎ ﺑﻴﻨﻬﺎ ﺑﺎﻟﺒﻴﺎﻧﺎﺕ ﺍﻟﻮﺳﻴﻄﺔ؛ ﻓﺎﳉﺪﻭﻝ ﺍﳌﻨﺸﺄ‬ ‫ﺑﻮﺍﺳﻄﺔ ﺇﺣﺪﻯ ﺍﳌﻬﺎﻡ ﻳﺴﺘﺨﺪﻡ ﻣﻦ ﻗَﺒﻞ ﻣﻬﻤﺔ ﺃﺧﺮﻯ ﻛﻤﺪﺧﻼﺕ. ﻭﺍﻋﺘﻤﺎﺩﺍ ﻋﻠﻰ ﺍﻟﺘﻌﺮﻳـﻒ‬ ‫ِ‬ ‫ﻟﻠﻤﻬﺎﻡ ﻭ ﳕﻮﺫﺝ ﺍﻟﱪﳎﺔ ﺍﳌﺘﻮﺍﺯﻳﺔ ﻓﻘﺪ ﳛﺼﻞ ﻫﻨﺎﻙ ﺗﻔﺎﻋﻞ ﻓﻴﻤﺎ ﺑﲔ ﺍﳌﻬﺎﻡ ﺍﻟﱵ ﺗﻈﻬﺮ ﻣـﺴﺘﻘﻠﺔ ﰲ‬ ‫ﳐﻄﻂ ﺍﻟﺘﺒﻌﻴﺔ. ﻓﻤﺜﻼ.. 
ﰲ ﺍﻟﺘﻘﺴﻴﻢ ﳌﺴﺄﻟﺔ ﺿﺮﺏ ﻣﺼﻔﻮﻓﺔ ﺑﺸﻌﺎﻉ، ﻭﻋﻠﻰ ﺍﻟﺮﻏﻢ ﻣﻦ ﺃﻥ ﲨﻴـﻊ‬ ‫ﺍﳌﻬﺎﻡ ﻣﺴﺘﻘﻠﺔ ﻋﻦ ﺑﻌﻀﻬﺎ ﺍﻟﺒﻌﺾ، ﻓﺈﻥ ﲨﻴﻊ ﻫﺬﻩ ﺍﳌﻬﺎﻡ ﺗﺘﻄﻠﺐ ﺍﻟﻮﺻﻮﻝ ﺇﱃ ﻛﺎﻣﻞ ﺍﻟﺸﻌﺎﻉ ‪.b‬‬ ‫ﻭﻧﻈﺮﹰﺍ ﻷﻥ ﻫﻨﺎﻙ ﻧﺴﺨﺔ ﻭﺍﺣﺪﺓ ﻣﻦ ﺍﻟﺸﻌﺎﻉ ‪ ،b‬ﻓﻌﻠﻰ ﺍﳌﻬﺎﻡ ﺃﻥ ُﺗﺮﺳﻞ ﻭﺗﺴﺘﻘﺒﻞ ﺍﻟﺮﺳﺎﺋﻞ ﻣـﻦ‬ ‫ﺍﳉﻤﻴﻊ ﻟﻜﻲ ﺗﺼﻞ ﺇﱃ ﺍﻟﺸﻌﺎﻉ ‪ b‬ﰲ ﳕﻮﺫﺝ ﺍﻟﺬﺍﻛﺮﺓ ﺍﳌﺸﺘﺮﻛﺔ.‬ ‫2.3 ﺍﻹﺟﺮﺍﺋﻴﺎﺕ ﻭﺍﳌﻘﺎﺑﻠﺔ‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫46‬ ‫ﺇﻥ ﺍﻹﺟﺮﺍﺋﻴﺎﺕ ﻫﻲ ﺃﺩﻭﺍﺕ ﺣﺴﺎﺑﻴﺔ ﻣﻨﻄﻘﻴﺔ)‪ (Logical‬ﺗﻘـﻮﻡ ﺑﺘﻨﻔﻴـﺬ ﺍﳌﻬـﺎﻡ. ﺃﻣـﺎ‬ ‫ﺍﳌﻌﺎﳉﺎﺕ ﻓﻬﻲ ﻭﺣﺪﺍﺕ ﻋﺘﺎﺩﻳﺔ )‪ (Hardware‬ﻭﺍﻟﱵ ﺗﻘﻮﻡ ﺑﺘﻨﻔﻴﺬ ﺍﻟﻌﻤﻠﻴﺎﺕ ﺍﳊﺴﺎﺑﻴﺔ ﻓﻴﺰﻳﺎﺋﻴـﺎ.‬ ‫ﹰ‬ ‫ﻭﰲ ﺍﻟﻐﺎﻟﺐ ﻓﺈﻧﻨﺎ ﻋﻨﺪﻣﺎ ﻧﺬﻛﺮ ﻣﺼﻄﻠﺢ ﺍﻹﺟﺮﺍﺋﻴﺎﺕ ﻓﻬﻨﺎﻙ ﺗﻄﺎﺑﻖ ﺑﻴﻨﻪ ﻭﺑﲔ ﻣﺼﻄﻠﺢ ﺍﳌﻌﺎﳉﺎﺕ،‬ ‫ﻭﻋﻤﻮﻣﺎ ﻓﺈﻧﻪ ﳝﻜﻦ ﺗﻮﺿﻴﻊ ﺃﻛﺜﺮ ﻣﻦ ﺇﺟﺮﺍﺋﻴﺔ ﻋﻠﻰ ﺍﳌﻌﺎﰿ ﺍﻟﻮﺍﺣﺪ.‬ ‫ﹰ‬ ‫ﻟﻜﻲ ﳓﺼﻞ ﻋﻠﻰ ﺗﺴﺮﻳﻊ ﺃﻋﻠﻰ ﻣﻦ ﺍﻟﺘﻨﻔﻴﺬ ﺍﻟﺘﺴﻠﺴﻠﻲ ﳚﺐ ﻋﻠﻰ ﺍﻟﱪﻧـﺎﻣﺞ ﺍﳌﺘـﻮﺍﺯﻱ ﺃﻥ‬ ‫ﻳﻜﻮﻥ ﻟﺪﻳﻪ ﻋﺪﺓ ﺇﺟﺮﺍﺋﻴﺎﺕ ﺗﻌﺎﰿ ﻋﺪﺓ ﻣﻬﺎﻡ ﺑﻨﻔﺲ ﺍﻟﻮﻗﺖ، ﻭﺗﺴﻤﻰ ﺍﻵﻟﻴﺔ ﺍﳌﺴﺘﺨﺪﻣﺔ ﻟﺘﻮﺯﻳـﻊ‬ ‫ﺍﳌﻬﺎﻡ ﻟﺘﻨﻔﻴﺬﻫﺎ ﻋﻠﻰ ﻋﺪﺓ ﺇﺟﺮﺍﺋﻴﺎﺕ ﺑﺎﳌﻘﺎﺑﻠﺔ ) ‪.(mapping‬‬ ‫ﻓﻤﺜﻼ: ﰲ ﺍﳌﺜﺎﻝ)4-3( ﰲ ﺿﺮﺏ ﺍﳌﺼﻔﻮﻓﺎﺕ ﳝﻜﻦ ﺃﻥ ﲣﺼﺺ ﺃﺭﺑﻊ ﺇﺟﺮﺍﺋﻴـﺎﺕ ﳌﻬﻤـﺔ‬ ‫ﹰ‬ ‫ﺣﺴﺎﺏ ﻣﺼﻔﻮﻓﺔ ﺟﺰﺋﻴﻪ ﻣﻦ ‪ ) c‬ﺳﻴﺬﻛﺮ ﺍﳌﺜﺎﻝ ﻻﺣﻘﺎ(.‬ ‫ﹰ‬ ‫ﻳﻠﻌﺐ ﳐﻄﻄﻲ ﺍﻟﺘﺒﻌﻴﺔ ﻭ ﺍﻟﺘﻔﺎﻋﻞ ﺑﲔ ﺍﳌﻬـﺎﻡ ﺩﻭﺭﹰﺍ ﻫﺎﻣـﺎ ﰲ ﺍﺧﺘﻴـﺎﺭ ﺍﳌﻘﺎﺑﻠـﺔ ﺍﳉﻴـﺪﺓ‬ ‫ﹰ‬ ‫ﻟﻠﺨﻮﺍﺭﺯﻣﻴﺎﺕ ﺍﳌﺘﻮﺍﺯﻳﺔ. ﻭﺍﳌﻘﺎﺑﻠﺔ ﺍﳉﻴﺪﺓ ﳚﺐ ﺃﻥ ﺗﺴﻌﻰ ﺇﱃ ﺯﻳﺎﺩﺓ ﺍﺳﺘﻌﻤﺎﻝ ﺍﻟﺘـﻮﺍﺯﻱ ﻭﺫﻟـﻚ‬ ‫ﲟﻘﺎﺑﻠﺔ ﺍﳌﻬﺎﻡ ﺍﳌﺴﺘﻘﻠﺔ ﻋﻠﻰ ﺇﺟﺮﺍﺋﻴﺎﺕ ﳐﺘﻠﻔﺔ، ﺃﻳﻀﺎ ﻻﺑﺪ ﺃﻥ ﺗﺴﻌﻰ ﺇﱃ ﺗﻘﻠﻴﺺ ﺍﻟـﺰﻣﻦ ﺍﻟﻜﻠـﻲ‬ ‫ﻭﺫﻟﻚ ﺑﻀﻤﺎﻥ ﺃﻥ ﺍﻹﺟﺮﺍﺋﻴﺎﺕ ﺟﺎﻫﺰﺓ ﻟﺘﻨﻔﻴﺬ ﺍﳌﻬﺎﻡ ﻋﻠﻰ ﺍﳌﺴﺎﺭ ﺍﳊﺮﺝ ﺣﺎﳌﺎ ﺗﻜﻮﻥ ﺍﳌﻬﺎﻡ ﻗﺎﺑﻠـﺔ‬ ‫ﻟﻠﺘﻨﻔﻴﺬ، ﻭﳚﺐ ﺃﻳﻀﺎ ﰲ ﺍﳌﻘﺎﺑﻠﺔ ﺍﳉﻴﺪﺓ ﺃﻥ ﺗﺴﻌﻰ ﺇﱃ ﺍﻹﻗﻼﻝ ﻣﻦ ﺍﻟﺘﻔﺎﻋﻼﺕ ﺑﲔ ﺍﻹﺟﺮﺍﺋﻴـﺎﺕ‬ ‫ﹰ‬ ‫ﻭﺫﻟﻚ ﲟﻘﺎﺑﻠﺔ ﺍﳌﻬﺎﻡ ﺍﻟﱵ ﳍﺎ ﺩﺭﺟﺔ ﻋﺎﻟﻴﺔ ﻣﻦ ﺍﻟﺘﻔﺎﻋﻞ ﺍﳌﺸﺘﺮﻙ ﺳﻮﻳﺎ ﻋﻠﻰ ﻧﻔﺲ ﺍﻹﺟﺮﺍﺋﻴﺔ.‬ ‫ﹰ‬ ‫ﻋﻠﻰ ﺳﺒﻴﻞ ﺍﳌﺜﺎﻝ ﰲ ﺍﻟﺸﻜﻞ)6-3( ﻋﺮﺽ ﳌﻘﺎﺑﻠﺔ ﻓﻌﺎﻟﺔ ﳌﺨﻄﻂ ﺍﻟﺘﻘﺴﻴﻢ ﻭ ﺍﻟﺘﻔﺎﻋﻞ ﺑـﲔ‬ ‫ﺍﳌﻬﺎﻡ ﺍﻟﻮﺍﺭﺩﺓ ﰲ ﺍﻟﺸﻜﻞ )5-3( ﺇﱃ ﺃﺭﺑﻊ ﺇﺟﺮﺍﺋﻴﺎﺕ، ﻭﻳﻼﺣﻆ ﰲ ﻫﺬﻩ ﺍﳊﺎﻟﺔ ﺃﻥ ﺍﳊﺪ ﺍﻷﻋﻠـﻰ‬ ‫ﻷﺭﺑﻊ ﺇﺟﺮﺍﺋﻴﺎﺕ ﳝﻜﻦ ﺃﻥ ﻳﺴﺘﺨﺪﻡ ﺑﺸﻜﻞ ﻣﻔﻴﺪ ﻋﻠﻰ ﺍﻟﺮﻏﻢ ﻣﻦ ﺃﻥ ﻋﺪﺩ ﺍﳌﻬﺎﻡ ﻫﻮ ﺳﺒﻊ ﻣﻬﺎﻡ،‬ ‫ﻭﻫﺬﺍ ﻳﻌﻮﺩ ﺇﱃ ﺃﻥ ﺩﺭﺟﺔ ﺍﻟﺘﺰﺍﻣﻦ ﺍﻟﻘﺼﻮﻯ ﻫﻲ ﺃﺭﺑﻌﺔ ﻓﻘﻂ. ﺃﻣﺎ ﺍﳌﻬﺎﻡ ﺍﻟﺜﻼﺙ ﺍﻷﺧﲑﺓ ﻓﻴﻤﻜﻦ ﺃﻥ‬ ‫ﻳﺘﻢ ﻣﻘﺎﺑﻠﺘﻬﺎ.‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫56‬ ‫ﻭﻟﻜﻦ ﻣﻦ ﺍﻷﻓﻀﻞ ﻣﻘﺎﺑﻠﺔ ﺍﳌﻬﺎﻡ ﺍﻟﱵ ﺗﺮﺗﺒﻂ ﺑﻀﻠﻊ ﻋﻠﻰ ﻧﻔﺲ ﺍﻹﺟﺮﺍﺋﻴﺔ ﻷﻥ ﺫﻟﻚ ﳝﻨـﻊ ﺃﻥ‬ ‫ﳛﺪﺙ ﺗﻔﺎﻋﻞ ﺑﲔ ﺍﳌﻬﺎﻡ ﺑﺴﺒﺐ ﺣﺪﻭﺙ ﺗﻔﺎﻋﻞ ﺑﲔ ﺍﻹﺟﺮﺍﺋﻴﺎﺕ. 
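The critical-path and average-concurrency quantities discussed above can be computed mechanically from a task-dependency graph. A minimal sketch in Python: the individual node weights below are assumptions, chosen only to reproduce the figures quoted in the text (total work 63 and 64, critical-path lengths 27 and 34, average concurrency 2.33 and 1.88); they are not taken verbatim from the figure.

```python
# Sketch: critical-path length and average degree of concurrency of a
# task-dependency graph. Node weights are hypothetical reconstructions.

def critical_path_length(weights, deps):
    """Longest node-weighted path ending at any node, via memoized DP."""
    memo = {}
    def longest_to(node):
        if node not in memo:
            memo[node] = weights[node] + max(
                (longest_to(p) for p in deps.get(node, [])), default=0)
        return memo[node]
    return max(longest_to(n) for n in weights)

# Graph (a): four independent leaf tasks, two intermediate joins, one final task.
weights_a = {1: 10, 2: 10, 3: 10, 4: 10, 5: 6, 6: 6, 7: 11}
deps_a = {5: [1, 2], 6: [3, 4], 7: [5, 6]}
total_a = sum(weights_a.values())                 # 63
path_a = critical_path_length(weights_a, deps_a)  # 10 + 6 + 11 = 27
avg_a = total_a / path_a                          # ≈ 2.33

# Graph (b): a more serial arrangement of the same computation.
weights_b = {1: 10, 2: 10, 3: 10, 4: 10, 5: 6, 6: 7, 7: 11}
deps_b = {5: [1, 2], 6: [5, 3], 7: [6, 4]}
total_b = sum(weights_b.values())                 # 64
path_b = critical_path_length(weights_b, deps_b)  # 10 + 6 + 7 + 11 = 34
avg_b = total_b / path_b                          # ≈ 1.88
```

The shorter critical path of graph (a) directly yields its higher average degree of concurrency, matching the discussion above.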
ﻓﻤﺜﻼ: ﰲ ﺍﻟﺸﻜﻞ )‪ (3-6.b‬ﺇﺫﺍ‬ ‫ﹰ‬ ‫ﻗﻤﻨﺎ ﲟﻘﺎﺑﻠﺔ ﺍﳌﻬﻤﺔ 5 ﻣﻊ 2‪ P‬ﻓﺈﻥ ﺫﻟﻚ ﻳﺘﻄﻠﺐ ﻣﻦ ﺍﻹﺟﺮﺍﺋﻴﺘﺎﻥ 0‪ P‬ﻭ 1‪ P‬ﺃﻥ ﺗﺘﻔﺎﻋﻼ ﻣﻊ ﺍﻹﺟﺮﺍﺋﻴﺔ‬ ‫2‪ .P‬ﻭﰲ ﺍﳌﻘﺎﺑﻠﺔ ﺍﳊﺎﻟﻴﺔ ﻓﺈﻧﻪ ﻳﻮﺟﺪ ﺗﻔﺎﻋﻞ ﻭﺣﻴﺪ ﺑﲔ ﺍﻹﺟﺮﺍﺋﻴﺘﺎﻥ 0‪ P‬ﻭ 1‪.P‬‬ ‫اﻟﺸﻜﻞ)6-3( اﻟﻤﻘﺎﺏﻠﺔ ﻟﻤﺨﻄﻂ اﻟﻤﻬﺎم ﻓﻲ ﺷﻜﻞ)5-3( إﻟﻰ أرﺏﻌﺔ إﺟﺮاﺋﻴﺎت ‪.P‬‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫66‬ ‫3.3 ﺗﻘﻨﻴﺎﺕ ﺍﻟﺘﻘﺴﻴﻢ‬ ‫ﻛﻤﺎ ﹸﻛﺮ ﻗﺒﻞ ﺫﻟﻚ ﻓﺈﻥ ﺇﺣﺪﻯ ﺍﳋﻄﻮﺍﺕ ﺍﻷﺳﺎﺳﻴﺔ ﺍﻟﱵ ﳓﺘﺎﺟﻬﺎ ﻣﻦ ﺃﺟﻞ ﺣﻞ ﺍﳌـﺴﺎﺋﻞ‬ ‫ﺫ‬ ‫ﺑﺎﻟﺘﻮﺍﺯﻱ ﻫﻲ ﺗﻘﺴﻴﻢ )‪ (Decomposition‬ﺍﻟﻌﻤﻠﻴﺎﺕ ﺍﳊﺴﺎﺑﻴﺔ ﻟﺘﺄﺩﻳﺘﻬﺎ ﻋﻠﻰ ﳎﻤﻮﻋﺔ ﻣﻬﺎﻡ ﺑﺸﻜﻞ‬ ‫ﻣﺘﺰﺍﻣﻦ ﻛﻤﺎ ﻫﻮ ﳏﺪﺩ ﺑﻮﺍﺳﻄﺔ ﳐﻄﻂ ﺍﻟﺘﺒﻌﻴﺔ. ﻭﺳﻨﻘﻮﻡ ﰲ ﻫﺬﻩ ﺍﻟﻔﻘﺮﺓ ﺑﺪﺭﺍﺳﺔ ﺑﻌـﺾ ﻃـﺮﻕ‬ ‫ﺍﻟﺘﻘﺴﻴﻤﺎﺕ ﺍﻟﺸﺎﺋﻌﺔ ﻣﻦ ﺃﺟﻞ ﺍﳊﺼﻮﻝ ﻋﻠﻰ ﺍﻟﺘﺰﺍﻣﻦ. ﻣﻊ ﺍﻟﻌﻠﻢ ﺑﺄﻥ ﻫﺬﻩ ﺍﻷﺳﺎﻟﻴﺐ ﻟﻴﺴﺖ ﺷﺎﻣﻠﺔ‬ ‫ﻟﻜﻞ ﺗﻘﻨﻴﺎﺕ ﺍﻟﺘﻘﺴﻴﻢ، ﻭﻛﺬﻟﻚ ﻓﺈﻥ ﺃﻱ ﺃﺳﻠﻮﺏ ﺗﻘﺴﻴﻢ ﻣﺬﻛﻮﺭ ﻟﻴﺲ ﻫﻨﺎﻙ ﺿﻤﺎﻥ ﺩﺍﺋﻢ ﺑﺄﻧـﻪ‬ ‫ﺳﻴﻌﻄﻲ ﺃﻓﻀﻞ ﺧﻮﺍﺭﺯﻣﻴﺔ ﻣﺘﻮﺍﺯﻳﺔ. ﻭﻋﻠﻰ ﺍﻟﺮﻏﻢ ﻣﻦ ﻭﺟﻮﺩ ﺑﻌﺾ ﺟﻮﺍﻧﺐ ﺍﻟﻘـﺼﻮﺭ ، ﻓـﺈﻥ‬ ‫ﺗﻘﻨﻴﺎﺕ ﺍﻟﺘﻘﺴﻴﻢ ﺍﳌﺬﻛﻮﺭﺓ ﰲ ﻫﺬﺍ ﺍﻟﻔﺼﻞ ﻏﺎﻟﺒﺎ ﻣﺎ ﺗﻜﻮﻥ ﻧﻘﻄﺔ ﺑﺪﺍﻳﺔ ﺟﻴﺪﺓ ﻟﻠﻌﺪﻳﺪ ﻣﻦ ﺍﳌﺴﺎﺋﻞ،‬ ‫ﻭﻟﻠﺤﺼﻮﻝ ﻋﻠﻰ ﺗﻘﺴﻴﻤﺎﺕ ﻓﻌﺎﻟﺔ ﻟﻌﺪﺩ ﻛﺒﲑ ﻣﻦ ﺍﳌﺴﺎﺋﻞ ﻓﺈﻧﻪ ﳝﻜﻦ ﺍﺳﺘﺨﺪﺍﻡ ﺃﺳﻠﻮﺏ ﺗﻘـﺴﻴﻢ‬ ‫ﻭﺍﺣﺪ ﺃﻭ ﺍﳌﺰﺝ ﺑﲔ ﺃﻛﺜﺮ ﻣﻦ ﺃﺳﻠﻮﺏ.‬ ‫ﰎ ﺗﺼﻨﻴﻒ ﺗﻘﻨﻴﺎﺕ ﺍﻟﺘﻘﺴﻴﻢ ﻫﺬﻩ ﺇﱃ:‬ ‫• ﺍﻟﺘﻘﺴﻴﻢ ﺍﻟﻌﻮﺩﻱ )‪.(Recursive Decomposition‬‬ ‫َ ِْ‬ ‫• ﺗﻘﺴﻴﻢ ﺍﻟﺒﻴﺎﻧﺎﺕ )‪.(Data Decomposition‬‬ ‫• ﺍﻟﺘﻘﺴﻴﻢ ﺍﻻﺳﺘﻜﺸﺎﰲ )‪.(Exploratory Decomposition‬‬ ‫• ﺍﻟﺘﻘﺴﻴﻢ ﺍﻟﺘﺨﻤﻴﲏ )‪.(Speculative Decomposition‬‬ ‫ﺇﻥ ﺃﺳﻠﻮﰊ ﺍﻟﺘﻘﺴﻴﻢ ﺍﻟﻌَﻮﺩﻱ ﻭﺗﻘﺴﻴﻢ ﺍﻟﺒﻴﺎﻧﺎﺕ ﻛﻼﳘﺎ ﺫﻭ ﻏﺮﺽ ﻋﺎﻡ؛ ﻭﺫﻟﻚ ﻷﻧﻪ ﳝﻜـﻦ‬ ‫ِ‬ ‫ﺍﺳﺘﺨﺪﺍﻣﻬﻤﺎ ﰲ ﺣﻞ ﺍﻟﻌﺪﻳﺪ ﻣﻦ ﺍﳌﺴﺎﺋﻞ. ﺃﻣﺎ ﺃﺳﻠﻮﰊ ﺍﻟﺘﻘﺴﻴﻢ ﺍﻟﺘﺨﻤﻴﲏ ﻭ ﺍﻻﺳﺘﻜﺸﺎﰲ ﻓﻜﻼﳘﺎ‬ ‫ﻳﺴﺘﺨﺪﻡ ﻷﻏﺮﺍﺽ ﺃﻛﺜﺮ ﺧﺼﻮﺻﻴﺔ، ﻭﳝﻜﻦ ﺗﻄﺒﻴﻘﻬﻤﺎ ﻋﻠﻰ ﺃﻧﻮﺍﻉ ﻣﻌﻴﻨﺔ ﻣﻦ ﺍﳌﺴﺎﺋﻞ.‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫76‬ ‫1.3.3 @‪ðč ìfl Ûa@áîÔnÛa‬‬ ‫‪†È‬‬ ‫)‪(Recursive Decomposition‬‬ ‫ﺍﻟﺘﻘﺴﻴﻢ ﺍﻟﻌﻮﺩﻱ ﻫﻮ ﻭﺳﻴﻠﺔ ﻟﺘﺤﻘﻴﻖ ﺍﻟﺘﺰﺍﻣﻦ ﰲ ﺍﳌﺴﺎﺋﻞ ﺍﻟﱵ ﳝﻜـﻦ ﺣﻠـﻬﺎ ﺑﺎﺳـﺘﺨﺪﺍﻡ‬ ‫َِﱡ‬ ‫ﺍﺳﺘﺮﺍﺗﻴﺠﻴﺔ "ﻓ ّﻕ-َﺗ ُﺪ")‪ ،(divide-and-conquer‬ﻭﰱ ﻫﺬﺍ ﺍﻷﺳﻠﻮﺏ ﻓﺈﻥ ﺍﳌـﺴﺄﻟﺔ ُﺤـﻞ‬ ‫ﺗ‬ ‫ﺮﺴ‬ ‫ﺑﺘﻘﺴﻴﻤﻬﺎ ﺃﻭﻻ ﺇﱃ ﳎﻤﻮﻋﺔ ﻣﺴﺎﺋﻞ ﻓﺮﻋﻴﺔ ﻣﺴﺘﻘﻠﺔ، ﻭﻛﻞ ﻭﺍﺣﺪﺓ ﻣﻦ ﻫﺬﻩ ﺍﳌﺴﺎﺋﻞ ﺍﻟﻔﺮﻋﻴﺔ ُﺗﺤﻞ‬ ‫ﺃﻳﻀﺎ ﺑﺘﻜﺮﺍﺭ ﺗﻘﺴﻴﻤﻬﺎ ﺇﱃ ﻣﺴﺎﺋﻞ ﻓﺮﻋﻴﺔ ﰒ ﺗُﺒﻊ ﺑﻨﺘﺎﺋﺠﻬﺎ ﳎﺘﻤﻌﺔ. 
ﻭﺍﺳﺘﺮﺍﺗﻴﺠﻴﺔ "ﻓ ﱢﻕ-ﺗـﺴﺪ”‬ ‫ﹶﺮ َ ُ‬ ‫ﺘ‬ ‫ﹰ‬ ‫ﺗﺆﺩﻱ ﺇﱃ ﺗﺰﺍﻣﻦ ﻃﺒﻴﻌﻲ ﻭﺫﻟﻚ ﻷﻧﻪ ﳝﻜﻦ ﺣﻞ ﺍﳌﺴﺎﺋﻞ ﺍﻟﻔﺮﻋﻴﺔ ﺍﳌﺨﺘﻠﻔﺔ ﺑﻨﻔﺲ ﺍﻟﻮﻗﺖ.‬ ‫ﻣﺜﺎل)3-3( : اﻟﻔﺮز‬ ‫اﻟﺴﺮﻳﻊ )‪(Quicksort‬‬ ‫ﺑﻔﺮﺽ ﺃﻧﻨﺎ ﻧﺮﻳﺪ ﻓﺮﺯ)ﺗﺮﺗﻴﺐ( ﺍﻟﺴﻠﺴﻠﺔ ‪ A‬ﻭﺍﳌﻜﻮﻧﺔ ﻣﻦ ‪ n‬ﻋﻨﺼﺮ ﺑﺎﺳﺘﺨﺪﺍﻡ ﺧﻮﺍﺭﺯﻣﻴـﺔ‬ ‫ﺍﻟﻔﺮﺯ ﺍﻟﺴﺮﻳﻊ ﺷﺎﺋﻌﺔ ﺍﻻﺳﺘﺨﺪﺍﻡ ﻭﺍﻟﱵ ﺗﻌﺪ ﺧﻮﺍﺭﺯﻣﻴﺔ ﻣﻦ ﺍﻟﻨﻤﻂ )ﻓﺮﻕ ﺗَـ ُﺪ(. ﺗﺒـﺪﺃ ﻫـﺬﻩ‬ ‫ﺴ‬ ‫ﹶﱢ‬ ‫ﹰ‬ ‫ﱡ‬ ‫ﺍﳋﻮﺍﺭﺯﻣﻴﺔ ﺑﺎﺧﺘﻴﺎﺭ ﻋﻨﺼﺮ ﳏﻮﺭﻱ ‪ ،X‬ﰒ ﺑﻌﺪ ﺫﻟﻚ ﻳﺘﻢ ﺗﻘـﺴﻴﻢ ﺍﻟﺴﻠـﺴﻠﺔ ‪ A‬ﺇﱃ ﺳﻠـﺴﻠﺘﲔ‬ ‫ﻓﺮﻋﻴﺘﲔ 0‪ A‬ﻭ 1‪ A‬ﲝﻴﺚ ﺗﻜﻮﻥ ﲨﻴﻊ ﻋﻨﺎﺻﺮ 0‪ A‬ﺃﺻﻐﺮ ﻣﻦ ‪ X‬ﻭﻛﻞ ﻋﻨﺎﺻﺮ 1‪ A‬ﺃﻛﱪ ﻣﻦ ﺃﻭ‬ ‫ﺗﺴﺎﻭﻯ ‪ .X‬ﺗﺸﻜﻞ ﺧﻄﻮﺓ ﺍﻟﺘﺠﺰﻱﺀ ﻫﺬﻩ ﺧﻄﻮﺓ ﺍﻟﺘﻘﺴﻴﻢ ﰲ ﺍﳋﻮﺍﺭﺯﻣﻴﺔ. ﻭﻛﻞ ﻣﻦ ﺍﻟﺴﻠﺴﻠﺘﲔ‬ ‫ﺍﻟﻔﺮﻋﻴﺘﲔ 0‪ A‬ﻭ1‪ A‬ﻳﺘﻢ ﻓﺮﺯﳘﺎ ﺑﻮﺍﺳﻄﺔ ﺍﻻﺳﺘﺪﻋﺎﺀ ﺍﻟﻌَـﻮﺩﻱ ﳋﻮﺍﺭﺯﻣﻴـﺔ ﺍﻟﻔـﺮﺯ ﺍﻟـﺴﺮﻳﻊ‬ ‫ِ‬ ‫‪ .Quicksort‬ﻭﻛﻞ ﺍﺳﺘﺪﻋﺎﺀ ﻣﻦ ﻫﺬﻩ ﺍﻻﺳﺘﺪﻋﺎﺀﺍﺕ ﺍﻟﻌَﻮﺩﱠﻳﺔ ﻳﺆﺩﻱ ﺇﱃ ﺗﻘﺴﻴﻢ ﺇﺿﺎﰲ ﻟﻠﺴﻼﺳﻞ.‬ ‫ﻳﻮﺿﺢ ﺍﻟﺸﻜﻞ )7-3( ﻫﺬﻩ ﺍﳌﺴﺄﻟﺔ ﻣﻊ ﻓﺮﺯ 21 ﻋﺪﺩ. ﻭﻳﻼﺣﻆ ﺃﻥ ﺍﻻﺳـﺘﺪﻋﺎﺀ ﺍﻟﻌـﻮﺩﻱ ﻻ‬ ‫ﻳﺘﻮﻗﻒ ﺇﻻ ﻋﻨﺪﻣﺎ ﲢﺘﻮﻱ ﻛﻞ ﺳﻠﺴﻠﺔ ﻓﺮﻋﻴﺔ ﻋﻠﻰ ﻋﻨﺼﺮ ﻭﺣﻴﺪ ﻓﻘﻂ.‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫86‬ ‫اﻟﺸﻜﻞ)7-3(: ﻣﺨﻄﻂ اﻟﺘﺒﻌﻴﺔ ﻟﻠﻔﺮز اﻟﺴﺮﻳﻊ واﻟﻘﺎﺋﻢ ﻋﻠﻰ اﻟﺘﻘﺴﻴﻢ اﻟﻌﻮدي ﻟﺘﻘﺴﻴﻢ ﻣﺘﺴﻠﺴﻠﺔ ﻣﻦ ٢١ رﻗﻢ.‬ ‫ﰲ ﺍﻟﺸﻜﻞ )7-3( ﻗﺪ ﻋﺮﻓﻨﺎ ﺍﳌﻬﻤﺔ ﺑﺄﻬﻧﺎ ﺍﻟﻘﻴﺎﻡ ﺑﺘﻘﺴﻴﻢ ﺳﻠﺴﻠﺔ ﻓﺮﻋﻴﺔ ﻣﻌﻄﺎﺓ. ﻭﻋﻠﻰ ﻫـﺬﺍ‬ ‫ﻓﺎﻥ ﺍﻟﺸﻜﻞ)7-3( ﻳﻮﺿﺢ ﺃﻳﻀﺎ ﳐﻄﻂ ﺍﳌﻬﻤﺔ ﺍﳋﺎﺹ ﺑﺎﳌﺴﺄﻟﺔ. ﻓﻔﻲ ﺍﻟﺒﺪﺍﻳـﺔ ﻫﻨـﺎﻙ ﺳﻠـﺴﻠﺔ‬ ‫ﻭﺍﺣﺪﺓ) ﺟﺬﺭ ﺍﻟﺸﺠﺮﺓ(. ﻭﳝﻜﻨﻨﺎ ﺃﻥ ﻧﺴﺘﺨﺪﻡ ﻋﻤﻠﻴﺔ ﻭﺍﺣﺪﺓ ﻟﺘﻘﺴﻴﻤﻬﺎ، ﻭﻋﻨﺪ ﺍﻛﺘﻤﺎﻝ ﻣﻬﻤـﺔ‬ ‫ﺍﳉﺬﺭ ﻓﺈﻧﻪ ﻳﻨﺘﺞ ﻋﻨﻬﺎ ﺍﺛﻨﺘﲔ ﻣﻦ ﺍﻟﺴﻼﺳﻞ ﺍﻟﻔﺮﻋﻴﺔ )0‪ A‬ﻭ1‪ A‬ﻣﺘﻮﺍﻓﻘﺘﲔ ﻣـﻊ ﺍﻟﻌﻘـﺪﺗﲔ ﰲ‬ ‫ﺍﳌﺴﺘﻮﻯ ﺍﻷﻭﻝ ﻣﻦ ﺍﻟﺸﺠﺮﺓ( ﻭﻛﻞ ﻣﻨﻬﻤﺎ ﳝﻜﻦ ﺃﻥ ُﻘﺴﻢ ﺑﺎﻟﺘﻮﺍﺯﻱ، ﻭﺑﻨﻔﺲ ﺍﻟﻄﺮﻳﻘﺔ ﻳـﺴﺘﻤﺮ‬ ‫ﻳ‬ ‫ﺍﻟﺘﺰﺍﻣﻦ ﰲ ﺍﻟﺰﻳﺎﺩﺓ ﻛﻠﻤﺎ ﻧﺰﻟﻨﺎ ﺇﱃ ﺃﺳﻔﻞ ﺍﻟﺸﺠﺮﺓ.‬ ‫ﰲ ﺑﻌﺾ ﺍﻷﺣﻴﺎﻥ ﳝﻜﻦ ﺍﻟﻘﻴﺎﻡ ﺑﺈﻋﺎﺩﺓ ﻫﻴﻜﻠﺔ ﺍﻟﻌﻤﻠﻴﺔ ﺍﳊﺴﺎﺑﻴﺔ ﻭﺫﻟﻚ ﳉﻌﻠﻬﺎ ﻗﺎﺑﻠﺔ ﻟﻠﺘﻘﺴﻴﻢ‬ ‫ﺍﻟﻌﻮﺩﻱ ﺣﱴ ﻟﻮ ﻛﺎﻧﺖ ﺍﳋﻮﺍﺭﺯﻣﻴﺔ ﺍﳌﺴﺘﺨﺪﻣﺔ ﻟﻠﻤﺴﺄﻟﺔ ﻟﻴﺴﺖ ﻣﻦ ﺍﻟﻨﻤﻂ )ﻓ ﱢﻕ-َﺗ ُﺪ(. ﻓﻌﻠـﻰ‬ ‫ﹶﺮ ﺴ‬ ‫ﺳﺒﻴﻞ ﺍﳌﺜﺎﻝ ﺑﻔﺮﺽ ﺃﻧﻨﺎ ﻧﺮﻳﺪ ﺇﳚﺎﺩ ﺍﻟﻌﻨﺼﺮ ﺍﻷﺻﻐﺮ ﰲ ﺳﻠﺴﻠﺔ ﻏﲑ ﻣﺮﺗﺒـﺔ ‪ A‬ﻣﻜﻮﻧـﺔ ﻣـﻦ ‪n‬‬ ‫ﻋﻨﺼﺮ. 
ﺗﻘﻮﻡ ﺍﳋﻮﺍﺭﺯﻣﻴﺔ ﺍﻟﺘﺴﻠﺴﻠﻴﺔ ﳊﻞ ﻫﺬﻩ ﺍﳌﺴﺄﻟﺔ ﺑﺎﻟﺘﺪﻗﻴﻖ ﰲ ﻛﻞ ﺍﻟﺴﻠﺴﻠﺔ ‪ ،A‬ﻭﰲ ﻛـﻞ‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫96‬ ‫ﺧﻄﻮﺓ ﺗﻘﻮﻡ ﺑﺘﺴﺠﻴﻞ ﺃﺻﻐﺮ ﻋﻨﺼﺮ ﻣﻮﺟﻮﺩ ﺣﱴ ﺍﻵﻥ، ﻛﻤﺎ ﻫﻮ ﻣﻮﺿﺢ ﰲ ﺍﳋﻮﺍﺭﺯﻣﻴـﺔ 1-3.‬ ‫ﻭﻣﻦ ﺍﻟﺴﻬﻞ ﺍﺩﺍﺭﻙ ﺃﻥ ﻫﺬﻩ ﺍﳋﻮﺍﺭﺯﻣﻴﺔ ﻟﻴﺴﺖ ﺗﺰﺍﻣﻨﻴﺔ.‬ ‫اﻟﺨﻮارزﻣﻴﺔ1-3: ﺏﺮﻥﺎﻣﺞ ﺕﺴﻠﺴﻠﻲ ﻹﻳﺠﺎد اﻟﻌﺪد اﻷﺹﻐﺮ ﻓﻲ ﻣﺼﻔﻮﻓﺔ أﻋﺪاد ‪ A‬ﺏﻄﻮل ‪. n‬‬ )‪procedure SERIAL_MIN (A, n‬‬ ‫‪begin‬‬ ‫;]0[‪min = A‬‬ ‫‪for i := 1 to n - 1 do‬‬ ‫;]‪if (A[i] < min) min := A[i‬‬ ‫;‪endfor‬‬ ‫;‪return min‬‬ ‫‪end SERIAL_MIN‬‬ ‫.1‬ ‫.2‬ ‫.3‬ ‫.4‬ ‫.5‬ ‫.6‬ ‫.7‬ ‫.8‬ ‫ﺣﲔ ُﻌﻴﺪ ﻫﻴﻜﻠﺔ ﻫﺬﻩ ﺍﳋﻮﺍﺭﺯﻣﻴﺔ ﻟﻜﻲ ﳒﻌﻠﻬﺎ ﻣﻦ ﺍﻟﻨﻤﻂ "ﻓ ّﻕ-َﺗ ُﺪ"، ﻓﺈﻧﻪ ﳝﻜـﻦ ﻟﻨـﺎ‬ ‫ﹶﺮ ﺴ‬ ‫ﻧ‬ ‫ﺍﺳﺘﺨﺪﺍﻡ ﺍﻟﺘﻘﺴﻴﻢ ﺍﻟﻌﻮﺩﻱ ﻛﻲ ﳒﻌﻞ ﻣﻨﻬﺎ ﺧﻮﺍﺭﺯﻣﻴﺔ ﻣﺘﺰﺍﻣﻨﺔ.‬ ‫ﺍﳋﻮﺍﺭﺯﻣﻴﺔ 2-3 ﻫﻲ ﻣﻦ ﺍﻟﻨﻤﻂ "ﻓ ّﻕ-َﺗ ُﺪ" ﻭﻫﻲ ﻣﻦ ﺃﺟﻞ ﺇﳚﺎﺩ ﺍﻟﻌﻨـﺼﺮ ﺍﻷﺻـﻐﺮ ﰲ‬ ‫ﹶﺮ ﺴ‬ ‫ﻣﺼﻔﻮﻓﺔ، ﻭﰱ ﻫﺬﻩ ﺍﳋﻮﺍﺭﺯﻣﻴﺔ ﻧﻘﻮﻡ ﺑﺘﻘﺴﻴﻢ ﺍﻟﺴﻠﺴﻠﺔ ‪ A‬ﺇﱃ ﺳﻠﺴﻠﺘﲔ ﻓﺮﻋﻴﺘﲔ ﳍﻤـﺎ ﺍﳊﺠـﻢ‬ ‫)2/‪ ،(n‬ﻭ ﻣﻦ ﰒ ﻧﻘﻮﻡ ﺑﺈﳚﺎﺩ ﺍﻟﻌﻨﺼﺮ ﺍﻷﺻﻐﺮ ﻟﻜﻞ ﻭﺍﺣﺪﺓ ﻣﻦ ﺍﻟﺴﻠﺴﻠﺘﲔ ﻭﺫﻟـﻚ ﺑﺎﺳـﺘﺨﺪﺍﻡ‬ ‫ﺍﻻﺳﺘﺪﻋﺎﺀ ﺍﻟﻌﻮﺩﻱ. ﻭﺍﻟﻌﻨﺼﺮ ﺍﻷﺻﻐﺮ ﺍﻟﻜﻠﻰ ﻳﻮﺟﺪ ﺑﺎﻧﺘﻘﺎﺀ ﺃﺻﻐﺮ ﻋﻨﺼﺮ ﰲ ﻫﺬﻳﻦ ﺍﻟﺴﻠـﺴﻠﺘﲔ.‬ ‫ﻳﺘﻮﻗﻒ ﺍﻻﺳﺘﺪﻋﺎﺀ ﺍﻟﻌﻮﺩﻱ ﻓﻘﻂ ﻋﻨﺪﻣﺎ ﻳﺘﺒﻘﻰ ﻋﻨﺼﺮ ﻭﺍﺣﺪ ﰲ ﻛﻞ ﺳﻠﺴﻠﺔ. ﻭ ﺍﻵﻥ ﻭ ﺑﻌـﺪ ﺃﻥ‬ ‫ﺃﻋﺪﻧﺎ ﻫﻴﻜﻠﺔ ﺍﳋﻮﺍﺭﺯﻣﻴﺔ ﺍﻟﺘﺴﻠﺴﻠﻴﺔ ﻬﺑﺬﺍ ﺍﻷﺳﻠﻮﺏ ﻓﺈﻧﻪ ﻳﻜﻮﻥ ﻣﻦ ﺍﻟﺴﻬﻞ ﺭﺳﻢ ﺍﳌﺨﻄﻂ ﺍﳌﻌﺘﻤﺪ‬ ‫ﹸِ‬ ‫ﻋﻠﻰ ﺍﳌﻬﻤﺔ ﳍﺬﻩ ﺍﳌﺴﺄﻟﺔ.‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫07‬ ‫اﻟﺸﻜﻞ)8-3(:‬ ‫ﻣﺨﻄﻂ اﻟﺘﺒﻌﻴﺔ ﻹﻳﺠﺎد اﻟﻌﺪد اﻷﺹﻐﺮ ﻟﻠﺴﻠﺴﻠﺔ ] 21,2,11,8,7,1,9,4 [. 
آﻞ ﻋﻘﺪة ﻓﻲ اﻟﺸﺠﺮة ﺕﻤﺜﻞ ﻣﻬﻤﺔ‬ ‫ﻹﻳﺠﺎد اﻟﻌﺪد اﻷﺹﻐﺮ ﻣﻦ ﻋﺪدﻳﻦ.‬ ‫اﻟﺨﻮارزﻣﻴﺔ 2-3 :ﺏﺮﻥﺎﻣﺞ ﻋﻮدي ﻹﻳﺠﺎد اﻟﻌﺪد اﻷﺹﻐﺮ ﻣﻦ ﺏﻴﻦ ﻋﻨﺎﺹﺮ ‪ A‬اﻟﻤﻜﻮﻥﺔ ﻣﻦ ‪ n‬ﻋﺪدا .‬ ‫ً‬ ‫)‪procedure RECURSIVE_MIN (A, n‬‬ ‫‪begin‬‬ ‫‪if (n = 1) then‬‬ ‫;]0[‪min := A‬‬ ‫‪else‬‬ ‫;)2/‪lmin := RECURSIVE_MIN (A, n‬‬ ‫;)2/‪rmin := RECURSIVE_MIN (&(A[n/2]), n - n‬‬ ‫‪if (lmin < rmin) then‬‬ ‫;‪min := lmin‬‬ ‫‪else‬‬ ‫;‪min := rmin‬‬ ‫;‪endelse‬‬ ‫;‪endelse‬‬ ‫;‪return min‬‬ ‫‪end RECURSIVE_MIN‬‬ ‫.1‬ ‫.2‬ ‫.3‬ ‫.4‬ ‫.5‬ ‫.6‬ ‫.7‬ ‫.8‬ ‫.9‬ ‫.01‬ ‫.11‬ ‫.21‬ ‫.31‬ ‫.41‬ ‫.51‬ ‫2.3.3 ﺗﻘﺴﻴﻢ ﺍﻟﺒﻴﺎﻧﺎﺕ )‪(Data Decomposition‬‬ ‫ﻳﻌﺘﱪ ﺗﻘﺴﻴﻢ ﺍﻟﺒﻴﺎﻧﺎﺕ ﺃﺳﻠﻮﺑﺎ ﻓﻌﺎﻻ ﻭﺷـﺎﺋﻌﺎ ﻳـﺴﺘﺨﺪﻡ ﻟﻠﺤـﺼﻮﻝ ﻋﻠـﻰ ﺍﻟﺘـﺰﺍﻣﻦ ﰲ‬ ‫ﹰ‬ ‫ﹰّﹰ‬ ‫ﺍﳋﻮﺍﺭﺯﻣﻴﺎﺕ ﺍﻟﱵ ﺗﻌﻤﻞ ﻋﻠﻰ ﺗﺮﺍﻛﻴﺐ ﺍﻟﺒﻴﺎﻧﺎﺕ ﺍﻟﻀﺨﻤﺔ. ﻭﰲ ﻫـﺬﺍ ﺍﻷﺳـﻠﻮﺏ، ﻓﺎﻟﺘﻘـﺴﻴﻢ‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫17‬ ‫ﻟﻠﻌﻤﻠﻴﺎﺕ ﺍﳊﺴﺎﺑﻴﺔ ﻳﺘﻢ ﰲ ﺧﻄﻮﺗﲔ: ﺍﳋﻄﻮﺓ ﺍﻷﻭﱃ ﻳﺘﻢ ﻓﻴﻬﺎ ﲡﺰﻱﺀ ﺍﻟﺒﻴﺎﻧﺎﺕ ﺍﻟﱵ ﲡﺮﻱ ﻋﻠﻴﻬﺎ‬ ‫ﺍﻟﻌﻤﻠﻴﺎﺕ ﺍﳊﺴﺎﺑﻴﺔ، ﻭﰲ ﺍﳋﻄﻮﺓ ﺍﻟﺜﻧﻴﺔ ﻳﺘﻢ ﺍﺳﺘﺨﺪﺍﻡ ﺍﻟﺒﻴﺎﻧﺎﺕ ﺍﺠﻤﻟﺰﺋﺔ ﻹﺣﺪﺍﺙ ﺗﻘﺴﻴﻢ ﻟﻠﻌﻤﻠﻴـﺔ‬ ‫ﺍﳊﺴﺎﺑﻴﺔ ﺇﱃ ﻣﻬﺎﻡ. ﻭﺍﻟﻌﻤﻠﻴﺎﺕ ﺍﻟﱵ ﺗﺘﻢ ﻬﺑﺎ ﻫﺬﻩ ﺍﳌﻬﺎﻡ ﻋﻠﻰ ﺃﺟﺰﺍﺀ ﺍﻟﺒﻴﺎﻧﺎﺕ ﺍﳌﺨﺘﻠﻔﺔ ﻋﺎﺩﺓ ﻣـﺎ‬ ‫ﺗﻜﻮﻥ ﻣﺘﺸﺎﻬﺑﺔ، ﺃﻭ ﻳﺘﻢ ﺍﺧﺘﻴﺎﺭﻫﺎ ﻣﻦ ﺑﲔ ﳎﻤﻮﻋﺔ ﺻﻐﲑﺓ ﻣﻦ ﺍﻟﻌﻤﻠﻴﺎﺕ.‬ ‫ﻭﳝﻜﻦ ﺃﻥ ﻳﺘﻢ ﺗﻘﺴﻴﻢ ﺍﻟﺒﻴﺎﻧﺎﺕ ﺑﻌﺪﺓ ﻃﺮﻕ ﳐﺘﻠﻔﺔ ﻛﻤﺎ ﺳﻴﺄﰐ ﺗﻔﺼﻴﻠﻪ ﺑﻌﺪ ﻗﻠﻴﻞ. ﻭﺑـﺸﻜﻞ‬ ‫ﻋﺎﻡ ﻳﻨﺒﻐﻲ ﺍﻻﺳﺘﻄﻼﻉ ﻭﺗﻘﻴﻴﻢ ﻛﻞ ﺍﻟﻄﺮﻕ ﺍﳌﻤﻜﻨﺔ ﻟﺘﻘﺴﻴﻢ ﺍﻟﺒﻴﺎﻧﺎﺕ ﻭﻣﻦ ﰒ ﲢﺪﻳﺪ ﺃﻳﻬﺎ ﺗﻌﻄـﻲ‬ ‫ﺗﻘﺴﻴﻢ ﺣﺴﺎﰊ ﻃﺒﻴﻌﻲ ﻭﻣﻨﺎﺳﺐ.‬ ‫• ﲡﺰﻱﺀ ﺍﻟﺒﻴﺎﻧﺎﺕ ﺍﳌﺨﺮﺟﺔ‬ ‫ﳝﻜﻦ ﰲ ﺍﻟﻌﺪﻳﺪ ﻣﻦ ﺍﳊﺴﺎﺑﺎﺕ ﺃﻥ ﻳﺘﻢ ﺣﺴﺎﺏ ﻛﻞ ﻋﻨﺼﺮ ﻣﻦ ﺍﳌﺨﺮﺟﺎﺕ ﻋﻠـﻰ ﺣـﺪﺓ‬ ‫ﻛﻮﻇﻴﻔﺔ ﻟﻠﻤﺪﺧﻼﺕ. ﻭﰲ ﻣﺜﻞ ﻫﺬﻩ ﺍﻟﻌﻤﻠﻴﺎﺕ ﺍﳊﺴﺎﺑﻴﺔ ﻓﺈﻥ ﲡﺰﻱﺀ ﺍﻟﺒﻴﺎﻧﺎﺕ ﺍﳌﺨﺮﺟﺔ ﻳﺘـﺴﺒﺐ‬ ‫ﰲ ﺣﺪﻭﺙ ﺍﻟﺘﻘﺴﻴﻢ ﺁﻟﻴﺎ ﻟﻠﻤﺴﺄﻟﺔ ﺇﱃ ﻣﻬﺎﻡ، ﲝﻴﺚ ﻳﺘﻢ ﺇﺳﻨﺎﺩ ﻋﻤﻞ ﺣﺴﺎﺏ ﳉﺰﺀ ﻣﻦ ﺍﳌﺨﺮﺟﺎﺕ‬ ‫ﹰ‬ ‫ﺇﱃ ﻛﻞ ﻣﻬﻤﺔ. ﻭﰲ ﺍﳌﺜﺎﻝ)4-3( ﺳﻨﻌﺮﺽ ﻟﻌﻤﻠﻴﺔ ﺿﺮﺏ ﻣﺼﻔﻮﻓﺔ ﻹﻳﻀﺎﺡ ﺍﻟﺘﻘﺴﻴﻢ ﺍﳌﻌﺘﻤﺪ ﻋﻠﻰ‬ ‫ﲡﺰﻱﺀ ﺍﻟﺒﻴﺎﻧﺎﺕ ﺍﳌﺨﺮﺟﺔ.‬ ‫اﻟﻤﺜﺎل)4-3( ﺿﺮب اﻟﻤﺼﻔﻮﻓﺎت اﻟﻤﺮﺏﻌﺔ‬ ‫ﺑﻔﺮﺽ ﺃﻧﻨﺎ ﻧﺮﻳﺪ ﺇﺟﺮﺍﺀ ﻋﻤﻠﻴﺔ ﺍﻟﻀﺮﺏ ﻋﻠﻰ ﺍﳌﺼﻔﻮﻓﺘﲔ)‪ A‬ﻭ ‪ (B‬ﻭﻛﻼﳘﺎ ﻣﻦ ﺍﳊﺠـﻢ‬ ‫‪ n×n‬ﻭﺳﻨﻘﻮﻡ ﺑﻮﺿﻊ ﺍﻟﻨﺎﺗﺞ ﰲ ﺍﳌﺼﻔﻮﻓﺔ ‪ .C‬ﰲ ﺍﻟﺸﻜﻞ)9-3( ﺗﻮﺿﻴﺢ ﻟﺘﻘﺴﻴﻢ ﻫﺬﻩ ﺍﳌﺴﺄﻟﺔ ﺇﱃ‬ ‫ﺃﺭﺑﻊ ﻣﻬﺎﻡ. 
ﺣﻴﺚ ﰎ ﺍﻋﺘﺒﺎﺭ ﺃﻥ ﻛﻞ ﻣﺼﻔﻮﻓﺔ ﻣﺮﻛﺒﺔ ﻣﻦ ﺃﺭﺑﻊ ﻛﺘﻞ )ﺃﻭ ﻣﺼﻔﻮﻓﺎﺕ ﺟﺰﺋﻴﺔ( ﲢﺪﺩ‬ ‫ﻫﺬﻩ ﺍﻟﻜﺘﻞ ﺑﻮﺍﺳﻄﺔ ﺗﻘﺴﻴﻢ ﻛﻞ ﺑﻌﺪ ﻣﻦ ﺍﳌﺼﻔﻮﻓﺔ ﺇﱃ ﻧﺼﻔﲔ )ﻭﺑﺬﻟﻚ ﺳﻴﻨﺘﺞ ﻟﺪﻳﻨﺎ ﺃﺭﺑﻊ ﻛﺘﻞ‬ ‫ﺩﺍﺧﻞ ﺍﳌﺼﻔﻮﻓﺔ(. ﻭﺍﳌﺼﻔﻮﻓﺎﺕ ﺍﳉﺰﺋﻴﺔ ﺍﻷﺭﺑﻊ ﻟﻠﻤﺼﻔﻮﻓﺔ ‪ ) C‬ﻛﻠﻬﺎ ﺗﻘﺮﻳﺒﺎ ﻣﻦ ﺍﳊﺠـﻢ × 2/‪n‬‬ ‫2/‪ (n‬ﻳﺘﻢ ﺣﺴﺎﻬﺑﺎ ﻣﺴﺘﻘﻠﺔ ﺑﺎﺳﺘﺨﺪﺍﻡ ﺃﺭﺑﻊ ﻣﻬﺎﻡ ﻛﻤﺠﻤـﻮﻉ ﳊﻮﺍﺻـﻞ ﺍﻟـﻀﺮﺏ ﺍﳌﻮﺍﻓـﻖ‬ ‫ﻟﻠﻤﺼﻔﻮﻓﺎﺕ ﺍﳉﺰﺋﻴﺔ ﺍﳌﻮﺟﻮﺩﺓ ﰲ ‪ A‬ﻭ ‪.B‬‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫27‬ ‫اﻟﺸﻜﻞ )9-3(: )‪ (a‬اﻟﺘﺠﺰيء ﻟﻤﺼﻔﻮﻓـﺎت اﻟـﻤﺪﺥﻼت واﻟﻤﺨﺮﺟﺎت إﻟﻰ ﻣﺼﻔـﻮﻓـﺎت ﺟﺰﺋﻴﺔ ﺏﺤﺠﻢ ٢×٢.‬ ‫)‪ (b‬اﻟﺘﻘﺴﻴﻢ ﻟﻤﺴﺄﻟﺔ ﺿﺮب اﻟﻤﺼﻔﻮﻓﺎت إﻟﻰ أرﺏﻊ ﻣﻬﺎم إﻋﺘﻤﺎدا ﻋﻠﻰ ﺕﺠﺰيء اﻟﻤﺼﻔﻮﻓﺎت اﻟﻮارد ﻓﻲ )‪.(a‬‬ ‫ً‬ ‫ﺇﻥ ﺍﻟﺘﻘﺴﻴﻢ ﺍﳌﻮﺿﺢ ﰲ ﺍﻟﺸﻜﻞ )9-3( ﻗﺎﺋﻢ ﻋﻠﻰ ﲡﺰﻱﺀ ﻣﺼﻔﻮﻓﺔ ﺍﳋـﺮﺝ ‪ C‬ﺇﱃ ﺃﺭﺑـﻊ‬ ‫ﻣﺼﻔﻮﻓﺎﺕ ﺟﺰﺋﻴﺔ ﻭﻛﻞ ﻭﺍﺣﺪﺓ ﻣﻦ ﺍﳌﻬﺎﻡ ﺍﻷﺭﺑﻊ ﺗﻘﻮﻡ ﲝﺴﺎﺏ ﻭﺍﺣﺪﺓ ﻣﻦ ﺍﳌﺼﻔﻮﻓﺎﺕ ﺍﳉﺰﺋﻴﺔ.‬ ‫ﻭﳚﺐ ﻣﻼﺣﻈﺔ ﺃﻥ ﺗﻘﺴﻴﻢ ﺍﻟﺒﻴﺎﻧﺎﺕ ﳜﺘﻠﻒ ﻋﻦ ﺗﻘﺴﻴﻢ ﺍﻟﻌﻤﻠﻴﺔ ﺍﳊﺴﺎﺑﻴﺔ ﺇﱃ ﻣﻬﺎﻡ. ﻭﺑﺎﻟﺮﻏﻢ ﻣﻦ‬ ‫ﺃﻥ ﻛﻼﳘﺎ ﻣﺘﺼﻞ ﺑﺎﻵﺧﺮ ﻭﺃﻥ ﺍﻷﻭﻝ ﰲ ﺍﻟﻐﺎﻟﺐ ﻣﺴﺎﻋﺪ ﻟﻠﺜﺎﱐ، ﻓﺈﻥ ﺗﻘﺴﻴﻤﺎ ﻣﻌﻄﻰ ﻟﻠﺒﻴﺎﻧﺎﺕ ﻻ‬ ‫ﹰ‬ ‫ﻳﻨﺘﺞ ﻋﻨﻪ ﺗﻘﺴﻴﻤﺎ ﻓﺮﻳﺪﹰﺍ ﺇﱃ ﻣﻬﺎﻡ. ﻋﻠﻰ ﺳﺒﻴﻞ ﺍﳌﺜﺎﻝ ﺍﻟﺸﻜﻞ )01-3( ﻳﻮﺿﺢ ﺗﻘﺴﻴﻤﲔ ﺁﺧـﺮﻳﻦ‬ ‫ﹰ‬ ‫ﻟﻀﺮﺏ ﺍﳌﺼﻔﻮﻓﺎﺕ، ﻛﻞ ﻭﺍﺣﺪ ﺇﱃ ﲦﺎﱐ ﻣﻬﺎﻡ، ﻭﻫﺬﺍﻥ ﺍﻟﺘﻘﺴﻴﻤﺎﻥ ﳑـﺎﺛﻼﻥ ﻟـﻨﻔﺲ ﺗﻘـﺴﻴﻢ‬ ‫ﺍﻟﺒﻴﺎﻧﺎﺕ ﺍﳌﻮﺟﻮﺩ ﰲ ﺍﻟﺸﻜﻞ ﺍﻟﺴﺎﺑﻖ)‪.(3-9.a‬‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫37‬ ‫اﻟﺸﻜﻞ )01-3(: ﻣﺜﺎﻻن ﻟﺘﻘﺴﻴﻢ ﻋﻤﻠﻴﺔ ﺿﺮب اﻟﻤﺼﻔﻮﻓﺔ إﻟﻰ ﺙﻤﺎﻥﻴﺔ ﻣﻬﺎم.‬ ‫ﺳﻨﻌﻄﻲ ﻣﺜﺎﻻ ﺁﺧﺮ ﻟﺘﻮﺿﻴﺢ ﻓﻜﺮﺓ ﺍﻟﺘﻘﺴﻴﻤﺎﺕ ﺍﳌﻌﺘﻤﺪﺓ ﻋﻠﻰ ﲡﺰﻱﺀ ﺍﻟﺒﻴﺎﻧﺎﺕ، ﻭﺳـﻨﻌﺮﺽ‬ ‫ﹰ‬ ‫ﻓﻴﻪ ﻣﺴﺄﻟﺔ ﺣﺴﺎﺏ ﺍﻟﺘﻜﺮﺍﺭ ﺿﻤﻦ ﳎﻤﻮﻋﺔ ﻣﻦ ﻋﺪﺓ ﻋﻨﺎﺻﺮ ﲢﺪﺙ ﺳﻮﻳﺎ) ‪ ( itemset‬ﰲ ﻗﺎﻋـﺪﺓ‬ ‫ﹰ‬ ‫ﺑﻴﺎﻧﺎﺕ ﺇﺟﺮﺍﺋﻴﺔ، ﻭﺍﻟﱵ ﳝﻜﻦ ﺗﻘﺴﻴﻤﻬﺎ ﺑﺎﻻﻋﺘﻤﺎﺩ ﻋﻠﻰ ﺍﻟﺘﺠﺰﻱﺀ ﻟﻠﺒﻴﺎﻧﺎﺕ ﺍﳌﺨﺮﺟﺔ.‬ ‫اﻟﻤﺜﺎل)5-3(: ﺡﺴﺎب ﺕﻜﺮارات اﻟﻤﻜﻮﻥﺎت ﻓﻲ اﻟﺘﻌﺎﻣﻞ ﻣﻊ ﻗﺎﻋﺪة اﻟﺒﻴﺎﻥﺎت‬ ‫ﺑﺎﻋﺘﺒﺎﺭ ﺃﻧﻨﺎ ﻧﺮﻳﺪ ﺣﺴﺎﺏ ﺍﻟﺘﻜﺮﺍﺭ ﺠﻤﻟﻤﻮﻋﺔ ﻣﻦ ﺍﻟﻌﻨﺎﺻﺮ ﺍﻟﱵ ﲢﺪﺙ ﺳﻮﻳﺎ)‪ (itemsets‬ﰲ‬ ‫ﹰ‬ ‫ﻗﺎﻋﺪﺓ ﺑﻴﺎﻧﺎﺕ ﺗﻔﺎﻋﻠﻴﺔ ﺃﻭ ﺇﺟﺮﺍﺋﻴﺔ ‪ .Transation Database‬ﻟﺪﻳﻨﺎ ﰲ ﻫﺬﻩ ﺍﳌﺴﺄﻟﺔ ﺍﺠﻤﻟﻤﻮﻋﺘﺎﻥ‬ ‫‪ T‬ﻭ ‪ I‬ﲝﻴﺚ ﺃﻥ ﺍﺠﻤﻟﻤﻮﻋﺔ ‪ T‬ﲢﺘﻮﻱ ﻋﻠﻰ ‪ n‬ﺇﺟﺮﺍﺋﻴﺔ)‪ (Transation‬ﺃﻣﺎ ﺍﺠﻤﻟﻤﻮﻋﺔ ‪ I‬ﻓﺘﺤﺘـﻮﻱ‬ ‫ﻋﻠﻰ ‪ m‬ﻋﺪﺩ ﻣﻦ ‪ .itemset‬ﻛﻞ ﺇﺟﺮﺍﺋﻴﺔ ﻭ ﻛﻞ ﳎﻤﻮﻋﺔ ﻋﻨﺎﺻﺮ)‪ (itemset‬ﲢﺘﻮﻱ ﻋﻠﻰ ﻋﺪﺩ‬ ‫ﻗﻠﻴﻞ ﻣﻦ ﺍﻟﻌﻨﺎﺻﺮ ﻣﻦ ﺑﲔ ﳎﻤﻮﻋﺔ ﳑﻜﻨﺔ ﻣﻦ ﺍﻟﻌﻨﺎﺻﺮ. ﻋﻠﻰ ﺳﺒﻴﻞ ﺍﳌﺜﺎﻝ، ﻗﺪ ﺗﻜﻮﻥ ‪ T‬ﻗﺎﻋـﺪﺓ‬ ‫ﺑﻴﺎﻧﺎﺕ ﻣﺘﺠﺮ ﳌﺒﻴﻌﺎﺕ ﺍﻟﺰﺑﺎﺋﻦ. 
ﻓﺈﺫﺍ ﺭﻏﺐ ﺍﳌﺴﺘﻮﺩﻉ ﺃﻥ ﻳﻌﺮﻑ ﻋﺪﺩ ﺍﻟﺰﺑﺎﺋﻦ ﺍﻟﺬﻳﻦ ﺍﺷﺘﺮﻭﺍ ﻛـﻞ‬ ‫ﳎﻤﻮﻋﺔ ﺍﻟﻌﻨﺎﺻﺮ ﺍﶈﺪﺩﺓ، ﻓﺈﻧﻪ ﳚﺐ ﺃﻥ ﳛﺴﺐ ﻋﺪﺩ ﺍﳌﺮﺍﺕ ﺍﻟﱵ ﻇﻬﺮﺕ ﻓﻴﻬﺎ ﻋﻨﺎﺻﺮ ‪ I‬ﰲ ﲨﻴﻊ‬ ‫ﺍﻹﺟﺮﺍﺋﻴﺎﺕ، ﲟﻌﲎ ﺃﻧﻪ ﻋﺪﺩ ﺍﻹﺟﺮﺍﺋﻴﺎﺕ ﺍﻟﱵ ﳎﻤﻮﻋﺔ ﺍﻟﻌﻨﺎﺻﺮ ﺟﺰﺀ ﻣﻨﻬﺎ.‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫47‬ ‫ﺍﻟﺸﻜﻞ )‪ (3-11.a‬ﻳﻮﺿﺢ ﻣﺜﺎﻝ ﳍﺬﺍ ﺍﻟﻨﻮﻉ ﻣﻦ ﺍﳊﺴﺎﺑﺎﺕ. ﻭﻗﺎﻋﺪﺓ ﺍﻟﺒﻴﺎﻧﺎﺕ ﺍﳌﻮﺿﺤﺔ ﰲ‬ ‫ﺍﻟﺸﻜﻞ)11-3( ﺗﺘﺄﻟﻒ ﻣﻦ ٠١ ﺇﺟﺮﺍﺋﻴﺎﺕ، ﻭﳓﻦ ﻧﺮﻏﺐ ﲝﺴﺎﺏ ﺍﻟﺘﻜﺮﺍﺭ ﻟﺜﻤﺎﻧﻴـﺔ ﳎﻤﻮﻋـﺎﺕ‬ ‫ﻋﻨﺎﺻﺮ ‪ Itemset‬ﺍﳌﻮﺿﺤﺔ ﰲ ﺍﻟﻌﻤﻮﺩ ﺍﻟﺜﺎﱐ. ﺍﻟﺘﻜﺮﺍﺭﺍﺕ ﺍﻟﻔﻌﻠﻴﺔ ﳍﺬﻩ ﺍﺠﻤﻟﻤﻮﻋـﺎﺕ ﰲ ﻗﺎﻋـﺪﺓ‬ ‫ﺍﻟﺒﻴﺎﻧﺎﺕ ﻣﻌﺮﻭﺿﺔ ﰲ ﺍﻟﻌﻤﻮﺩ ﺍﻟﺜﺎﻟﺚ. ﻭﻟﻠﺘﻤﺜﻴﻞ: ﳎﻤﻮﻋﺔ ﺍﻟﻌﻨﺎﺻﺮ }‪ {D,E‬ﺗﻈﻬـﺮ ﺛـﻼﺙ‬ ‫ﻣﺮﺍﺕ، ﻣﺮﺓ ﰲ ﺍﻹﺟﺮﺍﺋﻴﺔ ﺍﻟﺜﺎﻧﻴﺔ، ﻭﻣﺮﺓ ﺃﺧﺮﻯ ﰲ ﺍﻹﺟﺮﺍﺋﻴﺔ ﺍﻟﺮﺍﺑﻌﺔ ﻭﻣـﺮﺓ ﺛﺎﻟﺜـﺔ ﻳﻈﻬـﺮ ﰲ‬ ‫ﺍﻹﺟﺮﺍﺋﻴﺔ ﺍﻟﺘﺎﺳﻌﺔ.‬ ‫اﻟﺸﻜﻞ)11-3(: ﺡﺴﺎب ﺕﻜﺮار اﻟﻤﻜﻮﻥﺎت ﻓﻲ ﺕﻌﺎﻣﻼت ﻗﺎﻋﺪة اﻟﺒﻴﺎﻥﺎت‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫57‬ ‫ﻳﻮﺿﺢ ﺍﻟﺸﻜﻞ )‪ (3-11.b‬ﻛﻴﻒ ﺃﻥ ﻋﻤﻠﻴﺔ ﺣﺴﺎﺏ ﺍﻟﺘﻜﺮﺍﺭ ﺠﻤﻟﻤﻮﻋﺔ ﺍﻟﻌﻨﺎﺻﺮ ﳝﻜﻦ ﺃﻥ ﺗﻘـﺴﻢ‬ ‫ﺇﱃ ﻣﻬﻤﺘﲔ ﻭﺫﻟﻚ ﺑﺘﻘﺴﻴﻢ ﺍﻟﺒﻴﺎﻧﺎﺕ ﺍﳌﺨﺮﺟﺔ ﺇﱃ ﺟﺰﺃﻳﻦ ﻭﻣﻦ ﰒ ﲢﺴﺐ ﻛـﻞ ﻣﻬﻤـﺔ ﺍﳉـﺰﺀ‬ ‫ﺍﳋﺎﺹ ﻬﺑﺎ ﻣﻦ ﺍﻟﺘﻜﺮﺍﺭﺍﺕ. ﻻﺣﻆ ﺃﻥ ﻣﺪﺧﻼﺕ ﳎﻤﻮﻋﺔ ﺍﻟﻌﻨﺎﺻﺮ ﺃﻳﻀﺎ ﰎ ﺗﻘـﺴﻴﻤﻬﺎ، ﻭﻟﻜـﻦ‬ ‫ﺍﻟﺪﺍﻋﻲ ﻟﻠﺘﻘﺴﻴﻢ ﰲ ﺍﻟﺸﻜﻞ )‪ (3-11.b‬ﻫﻮ ﺃﻥ ﺗﻘﻮﻡ ﻛﻞ ﻣﻬﻤﺔ ﲝﺴﺎﺏ ﺟﺰﺀ ﻣﻦ ﺍﻟﺘﻜـﺮﺍﺭﺍﺕ‬ ‫ﺍﻟﺬﻱ ﺃﺳﻨﺪ ﺇﻟﻴﻬﺎ ﺑﺸﻜﻞ ﻣﺴﺘﻘﻞ.‬ ‫• ﲡﺰﻱﺀ ﺍﻟﺒﻴﺎﻧﺎﺕ ﺍﳌﺪﺧﻠﺔ‬ ‫ﺇﻥ ﺍﻟﺘﺠﺰﻱﺀ ﻟﻠﺒﻴﺎﻧﺎﺕ ﺍﳌﺨﺮﺟﺔ ﳝﻜﻦ ﺃﻥ ﻳﺘﻢ ﻓﻘﻂ ﺇﺫﺍ ﻛﺎﻥ ﻛﻞ ﳐﺮﺝ ﳝﻜﻦ ﺃﻥ ﳛـﺴﺐ‬ ‫ﻃﺒﻴﻌﻴﺎ ﻛﻮﻇﻴﻔﺔ ﻟﻠﺒﻴﺎﻧﺎﺕ ﺍﳌﺪﺧﻠﺔ. ﻭﰲ ﺍﻟﻌﺪﻳﺪ ﻣﻦ ﺍﳋﻮﺍﺭﺯﻣﻴﺎﺕ ﻟﻴﺲ ﻣـﻦ ﺍﳌﻤﻜـﻦ ﺍﻟﻘﻴـﺎﻡ‬ ‫ﹰ‬ ‫ﺑﺘﺠﺰﻱﺀ ﺍﻟﺒﻴﺎﻧﺎﺕ ﺍﳌﺨﺮﺟﺔ. ﻓﻤﺜﻼ: ﻋﻨﺪ ﳏﺎﻭﻟﺔ ﺇﳚﺎﺩ ﺍﻟﻌﺪﺩ ﺍﻷﺻﻐﺮ ﺃﻭ ﺍﻷﻛﱪ ﺠﻤﻟﻤﻮﻋـﺔ ﻣـﻦ‬ ‫ﹰ‬ ‫ﺍﻷﻋﺪﺍﺩ، ﻓﺈﻥ ﺍﳌﺨﺮﺟﺎﺕ ﻫﻲ ﻗﻴﻤﺔ ﻓﺮﺩﻳﺔ ﻏﲑ ﻣﻌﻠﻮﻣﺔ. ﻭﰲ ﺧﻮﺍﺭﺯﻣﻴﺔ ﺍﻟﻔﺮﺯ ﻓﺎﻟﻌﻨﺎﺻﺮ ﺍﻟﻔﺮﺩﻳـﺔ‬ ‫ﻣﻦ ﺍﳌﺨﺮﺟﺎﺕ ﻻ ﳝﻜﻦ ﻣﻌﺎﻣﻠﺘﻬﺎ ﻭﻫﻲ ﻣﻨﻌﺰﻟﺔ. ﻭﰲ ﻣﺜﻞ ﻫﺬﻩ ﺍﳊﺎﻻﺕ ﻗﺪ ﻳﻜﻮﻥ ﻣﻦ ﺍﳌﻤﻜـﻦ‬ ‫ﺍﻟﻘﻴﺎﻡ ﺑﺘﺠﺰﻱﺀ ﺍﻟﺒﻴﺎﻧﺎﺕ ﺍﳌﺪﺧﻠﺔ ﻭﻣﻦ ﰒ ﺍﺳﺘﺨﺪﺍﻡ ﻫﺬﻩ ﺍﻷﻗﺴﺎﻡ ﻟﻠﺤﺼﻮﻝ ﻋﻠﻰ ﺍﻟﺘﺰﺍﻣﻦ. ﻳـﺘﻢ‬ ‫ﺇﻧﺸﺎﺀ ﻣﻬﻤﺔ ﻟﻜﻞ ﺟﺰﺀ ﻣﻦ ﺑﻴﺎﻧﺎﺕ ﺍﳌﺪﺧﻼﺕ، ﻭﺗﻨﻔﺬ ﻫﺬﻩ ﺍﳌﻬﻤﺔ ﻗﺪﺭ ﺍﳌـﺴﺘﻄﺎﻉ ﺑﺎﺳـﺘﺨﺪﺍﻡ‬ ‫ﺍﻟﺒﻴﺎﻧﺎﺕ ﺍﶈﻠﻴﺔ. ﻻﺣﻆ ﺃﻥ ﺍﳊﻞ ﻟﻠﻤﻬﺎﻡ ﰲ ﲡﺰﻱﺀ ﺍﻟﺒﻴﺎﻧﺎﺕ ﺍﳌﺪﺧﻠﺔ ﺭﲟﺎ ﻻ ﻳـﺆﺩﻱ ﺇﱃ ﺍﳊـﻞ‬ ‫ﺍﻟﻨﻬﺎﺋﻲ ﻣﺒﺎﺷﺮﺓ، ﻭﰲ ﻣﺜﻞ ﻫﺬﻩ ﺍﳊﺎﻻﺕ ﻳﻠﺰﻡ ﺇﺟﺮﺍﺀ ﺣﺴﺎﺑﺎﺕ ﺇﺿﺎﻓﻴﺔ ﻟﺘﺠﻤﻴﻊ ﺍﻟﻨﻮﺍﺗﺞ ﺍﳉﺰﺋﻴﺔ.‬ ‫ﻓﻌﻠﻰ ﺳﺒﻴﻞ ﺍﳌﺜﺎﻝ ﻋﻨﺪ ﳏﺎﻭﻟﺔ ﺇﳚﺎﺩ ﳎﻤﻮﻉ ﺳﻠﺴﻠﺔ ﻣﻜﻮﻧﺔ ﻣﻦ ‪ N‬ﻋﺪﺩ ﺑﺎﺳﺘﺨﺪﺍﻡ ‪ P‬ﺇﺟﺮﺍﺋﻴـﺔ‬ ‫)‪ ،(N>P‬ﻓﺈﻧﻪ ﳝﻜﻦ ﲡﺰﺀ ﺍﳌﺪﺧﻼﺕ ﺇﱃ ‪ P‬ﺳﻠﺴﻠﺔ ﻓﺮﻋﻴﺔ ﺑﺄﺣﺠﺎﻡ ﻣﺘﺴﺎﻭﻳﺔ. 
ﺑﻌﺪ ﺫﻟﻚ ﺗﻘﻮﻡ‬ ‫ﻛﻞ ﻣﻬﻤﺔ ﲝﺴﺎﺏ ﺍﺠﻤﻟﻤﻮﻉ ﻟﻮﺍﺣﺪﺓ ﻣﻦ ﺍﻟﺴﻼﺳﻞ ﺍﻟﻔﺮﻋﻴﺔ. ﻭﰲ ﺍﻟﻨﻬﺎﻳﺔ ﳝﻜﻦ ﺃﻥ ﳓﺼﻞ ﻋﻠـﻰ‬ ‫ﺍﻟﻨﺎﺗﺞ ﺍﻟﻨﻬﺎﺋﻲ ﻭﺫﻟﻚ ﺑﺘﺠﻤﻴﻊ ﻧﻮﺍﺗﺞ ﺍﻟـ‪ P‬ﺳﻠﺴﻠﺔ ﻓﺮﻋﻴﺔ.‬ ‫ﰲ ﻣﺴﺄﻟﺔ ﺣﺴﺎﺏ ﺍﻟﺘﻜﺮﺍﺭ ﺠﻤﻟﻤﻮﻋﺔ ﻣﻦ ﺍﻟﻌﻨﺎﺻﺮ ﰲ ﻗﺎﻋﺪﺓ ﺑﻴﺎﻧـﺎﺕ ﺇﺟﺮﺍﺋﻴـﺔ ﺍﳌﻮﺿـﺤﺔ ﰲ‬ ‫ﺍﳌﺜﺎﻝ)5-3( ﳝﻜﻦ ﺃﻳﻀﺎ ﺃﻥ ﻳﺘﻢ ﺗﻘﺴﻴﻤﻬﺎ ﺑﺎﻻﻋﺘﻤﺎﺩ ﻋﻠﻰ ﺍﻟﺘﺠﺰﻱﺀ ﻟﻠﻤﺪﺧﻼﺕ. ﻳﻮﺿﺢ ﺍﻟﺸﻜﻞ‬ ‫ﹰ‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫67‬ ‫)‪ (3-12.a‬ﲡﺰﻱﺀ ﺍﻟﺒﻴﺎﻧﺎﺕ ﺍﳌﺪﺧﻠﺔ. ﺗﻘﻮﻡ ﻛﻞ ﻣﻬﻤﺔ ﻣـﻦ ﺍﳌﻬﻤـﺘﲔ ﲝـﺴﺎﺏ ﺍﻟﺘﻜـﺮﺍﺭﺍﺕ‬ ‫ﻟﻠﻤﺠﻤﻮﻋﺔ ﺍﻟﻔﺮﻋﻴﺔ ﺍﳋﺎﺻﺔ ﻬﺑﺎ. ﺍﺠﻤﻟﻤﻮﻋﺘﺎﻥ ﺍﻟﻨﺎﲡﺘﺎﻥ ﻋﻦ ﺍﳌﻬﻤﺘﲔ ﲤﺜﻼﻥ ﻧﻮﺍﺗﺞ ﻭﺳﻴﻄﺔ ﻭﺑـﻀﻢ‬ ‫ﻫﺬﻩ ﺍﻟﻨﻮﺍﺗﺞ ﺳﻮﻳﺎ ﺳﻴﻨﺘﺞ ﻟﺪﻳﻨﺎ ﺍﻟﻨﺎﺗﺞ ﺍﻟﻨﻬﺎﺋﻲ.‬ ‫ﹰ‬ ‫اﻟﺸﻜﻞ)21-3(: ﺏﻌﺾ اﻟﺘﻘﺴﻴﻤﺎت ﻟﺤﺴﺎب ﺕﻜﺮار اﻟﻤﻜﻮﻥﺎت ﻓﻲ ﺕﻌﺎﻣﻼت ﻗﺎﻋﺪة اﻟﺒﻴﺎﻥﺎت‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫77‬ ‫• ﲡﺰﻱﺀ ﺍﻟﺒﻴﺎﻧﺎﺕ ﺍﳌﺨﺮﺟﺔ ﻭﺍﳌﺪﺧﻠﺔ ﻣﻌﺎ‬ ‫ﹰ‬ ‫ﻗﺪ ﻳﻜﻮﻥ ﳑﻜﻨﺎ ﰲ ﺑﻌﺾ ﺍﳊﺎﻻﺕ ﺍﻟﱵ ﺗﻘﺒﻞ ﲡﺰﻱﺀ ﺍﳌﺨﺮﺟﺎﺕ ﺃﻥ ﻳﺘﻢ ﲡﺰﻱﺀ ﺍﳌﺪﺧﻼﺕ‬ ‫ﺃﻳﻀﺎ ﳑﺎ ﻳﺆﺩﻱ ﺇﱃ ﺗﺰﺍﻣﻦ ﺃﻛﺜﺮ. ﻓﻌﻠﻰ ﺳﺒﻴﻞ ﺍﳌﺜﺎﻝ، ﰲ ﺍﻟـﺸﻜﻞ)‪ (3-12.b‬ﻣـﺴﺄﻟﺔ ﺣـﺴﺎﺏ‬ ‫ﺍﻟﺘﻜﺮﺍﺭ ﰲ ﻗﺎﻋﺪﺓ ﺑﻴﺎﻧﺎﺕ. ﻭﻳﻈﻬﺮ ﻓﻴـﻪ ﺃﻥ ‪ Transaction‬ﻣﻘـﺴﻢ ﺇﱃ ﺟـﺰﺃﻳﻦ ﻭﻛـﺬﻟﻚ‬ ‫‪ frequencies‬ﻗﺪ ﰎ ﺗﻘﺴﻴﻤﻬﺎ ﺇﱃ ﺟﺰﺃﻳﻦ، ﻭﻳﺴﻨﺪ ﻟﻜﻞ ﻣﻬﻤﺔ ﺃﺣﺪ ﺍﻻﺣﺘﻤﺎﻻﺕ ﺍﻷﺭﺑﻊ. ﻭﺑﻌﺪ‬ ‫ﺫﻟﻚ ﺗﻘﻮﻡ ﻛﻞ ﻣﻬﻤﺔ ﲝﺴﺎﺏ ﺍﳉﺰﺀ ﺍﳋﺎﺹ ﻬﺑﺎ ﻣﻦ ﺍﻟﺘﻜﺮﺍﺭﺍﺕ، ﻭﰲ ﺍﻟﻨﻬﺎﻳﺔ ﻳﺘﻢ ﲨﻊ ﳐﺮﺟـﺎﺕ‬ ‫ﺍﳌﻬﻤﺔ 1 ﻣﻊ ﳐﺮﺟﺎﺕ ﺍﳌﻬﻤﺔ 3، ﻭﻳﺘﻢ ﺃﻳﻀﺎ ﲨﻊ ﺍﳌﻬﻤﺔ 2 ﻣﻊ ﺍﳌﻬﻤﺔ 4.‬ ‫ﹰ‬ ‫3.3.3 @‪@ @(Exploratory Decomposition)@Àb“Ønüa@áîÔnÛa‬‬ ‫ﻳﺴﺘﺨﺪﻡ ﺍﻟﺘﻘﺴﻴﻢ ﺍﻻﺳﺘﻜﺸﺎﰲ ﻟﻐﺮﺽ ﺗﻘﺴﻴﻢ ﺍﳌﺴﺎﺋﻞ ﺍﻟﱵ ﺗﺘﻮﺍﻓﻖ ﺣﺴﺎﺑﺎﻬﺗﺎ ﺿـﻤﻨﻴﺎ ﻣـﻊ‬ ‫ﺍﻟﺒﺤﺚ ﻋﻦ ﻓﻀﺎﺀ ﻟﻠﺤﻠﻮﻝ. ﻭﰱ ﺍﻟﺘﻘﺴﻴﻢ ﺍﻻﺳﺘﻜﺸﺎﰲ ﻧﻘﻮﻡ ﺑﺘﻘﺴﻴﻢ ﻣﺴﺎﺣﺔ ﺍﻟﺒﺤﺚ ﺇﱃ ﺃﺟﺰﺍﺀ‬ ‫ﺻﻐﲑﺓ، ﻭﻳﺘﻢ ﺍﻟﺒﺤﺚ ﰲ ﻛﻞ ﺍﻷﺟﺰﺍﺀ ﺑﺸﻜﻞ ﻣﺘﺰﺍﻣﻦ ﺇﱃ ﺃﻥ ﻳﺘﻢ ﺇﳚﺎﺩ ﺍﳊﻞ ﺍﳌﻄﻠﻮﺏ. ﻭﺗﻌﺘـﱪ‬ ‫ﻣﺴﺄﻟﺔ "ﹸﺣﺠﻴﺔ ﺍﳌﺮﺑﻊ" )‪(15-Puzzle‬ﻛﻤﺜﺎﻝ ﻟﻠﺘﻘﺴﻴﻢ ﺍﻻﺳﺘﻜﺸﺎﰲ.‬ ‫ﺃ‬ ‫اﻟﻤﺜﺎل )6-3(: ﻣﺴﺄﻟﺔ أﺡﺠﻴﺔ اﻟﻤﺮﺏﻊ‬ ‫ﺗﺘﻜﻮﻥ ﻫﺬﻩ ﺍﳌﺴﺄﻟﺔ ﻣﻦ 51 ﺑﻼﻃﺔ ﻣﺮﻗﻤﺔ ﻣﻦ 1 ﺇﱃ 51 ﺑﺎﻹﺿﺎﻓ ﺇﱃ ﻣﺴﺎﺣﺔ ﻓﺎﺭﻏﺔ ﺗﻜﻔﻲ‬ ‫ﻟﺒﻼﻃﺔ ﻭﺍﺣﺪﺓ، ﻭﻗﺪ ﰎ ﻭﺿﻊ ﻫﺬﻩ ﺍﻟﺒﻼﻁ ﻋﻠﻰ ﺷﻜﻞ ﺷﺒﻜﺔ ﻣﺮﺑﻌـﺔ ﺃﺑﻌﺎﺩﻫـﺎ 4×4. ﻭﳝﻜـﻦ‬ ‫ﲢﺮﻳﻚ ﺑﻼﻃﺔ ﻧﺎﺣﻴﺔ ﺍﳌﻮﺿﻊ ﺍﻟﻔﺎﺭﻍ ﻣﻦ ﺍﳌﻮﺿﻊ ﺍﺠﻤﻟﺎﻭﺭ، ﻭﻬﺑﺬﺍ ﺳﻴﻨﺘﺞ ﻓﺮﺍﻍ ﰲ ﺍﳌﻜﺎﻥ ﺍﻷﺻـﻠﻲ‬ ‫ﻟﻠﺒﻼﻃﺔ ﺍﻟﱵ ﺣﺮﻛﺖ. ﻭﺑﻨﺎﺀﺍ ﻋﻠﻰ ﺗﺮﺗﻴﺐ ﺍﻟﺸﺒﻜﺔ ﻓﺈﻧﻪ ﳝﻜﻦ ﺃﻥ ﻳﻜﻮﻥ ﻫﻨﺎﻙ ﺣﱴ ﺃﺭﺑﻊ ﺣﺮﻛﺎﺕ‬ ‫ﳑﻜﻨﺔ ﻟﻸﻋﻠﻰ ﻭﺍﻷﺳﻔﻞ ﻭﺍﻟﻴﻤﲔ ﻭﺍﻟﻴﺴﺎﺭ. 
ﻭﺍﻟﺘﺮﺗﻴﺐ ﺍﻷﻭﻝ ﻭﺍﻷﺧﲑ ﻟﻠﺒﻼﻁ ﻳﻜﻮﻥ ﳏﺪﺩ ﻣﺴﺒﻘﺎ.‬ ‫ﹰ‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫87‬ ‫ﻭﺍﳍﺪﻑ ﻫﻮ ﲢﺪﻳﺪ ﺃﻱ ﺗﺘﺎﺑﻊ ﺃﻭ ﺃﻗﺼﺮ ﺗﺘﺎﺑﻊ ﻟﻠﺤﺮﻛﺎﺕ ﻳﺆﺩﻱ ﺇﱃ ﺍﻟﺘﺤﻮﻝ ﻣﻦ ﺍﻟﺘﺮﺗﻴـﺐ ﺍﻷﻭﱄ‬ ‫ﺇﱃ ﺍﻟﺘﺮﺗﻴﺐ ﺍﻟﻨﻬﺎﺋﻲ.‬ ‫ﻳﻮﺿﺢ ﺍﻟﺸﻜﻞ)31-3( ﳕﻮﺫﺝ ﻟﻠﺘﺮﺗﻴﺐ ﺍﻷﻭﻝ ﻭﺍﻟﺘﺮﺗﻴﺐ ﺍﻟﻨﻬﺎﺋﻲ ﻭﺗﺘﺎﺑﻊ ﺍﳊﺮﻛـﺎﺕ ﻣـﻦ‬ ‫ﺍﻟﺘﺮﺗﻴﺐ ﺍﻷﻭﻝ ﺇﱃ ﺍﻟﺘﺮﺗﻴﺐ ﺍﻟﻨﻬﺎﺋﻲ.‬ ‫اﻟﺸﻜﻞ)31-3(: ﻣﺴﺄﻟﺔ ُﺡﺠﻴﺔ اﻟﻤﺮﺏﻊ ﺕﻮﺿﺢ اﻟﺘﺮﺕﻴ ﺐ اﻷول )‪ (a‬واﻟﺘﺮﺕﻴ ﺐ اﻟﻨﻬ ﺎﺋﻲ )‪ (d‬وﺕﺘ ﺎﺏﻊ اﻟﺤﺮآ ﺎت ﻣ ﻦ‬ ‫أ‬ ‫اﻟﺘﺮﺕﻴﺐ اﻷول وﺡﺘﻰ اﻟﺘﺮﺕﻴﺐ اﻟﻨﻬﺎﺋﻲ.‬ ‫ﻭﳝﻜﻦ ﺣﻞ ﻣﺴﺄﻟﺔ ﹸﺣﺠﻴﺔ ﺍﳌﺮﺑﻊ ﺑﺎﺳﺘﺨﺪﺍﻡ ﺗﻘﻨﻴﺎﺕ ﺷﺠﺮﺓ ﺍﻟﺒﺤﺚ. ﻓﺒﺪﺀﹰﺍ ﻣـﻦ ﺍﻟﺘﺮﺗﻴـﺐ‬ ‫ﺃ‬ ‫ﺍﻷﻭﻝ ﺗﻜﻮﻥ ﲨﻴﻊ ﺍﻟﺘﺮﺗﻴﺒﺎﺕ ﺍﳌﺘﻔﺮﻋﺔ ﺍﳌﻤﻜﻨﺔ ﻗﺪ ﺃﻧﺘﺠﺖ. ﻭﺭﲟﺎ ﻳﻜﻮﻥ ﻟﺪﻯ ﺍﻟﺘﺮﺗﻴﺐ 2 ﺃﻭ 3 ﺃﻭ‬ ‫4 ﺗﺮﺗﻴﺒﺎﺕ ﻭﺭﻳﺜﺔ ﺃﺧﺮﻯ، ﻛﻞ ﻭﺍﺣﺪ ﻣﻨﻬﺎ ﳛﺘﻞ ﺍﳊﻴﺰ ﺍﻟﻔﺎﺭﻍ ﺍﺠﻤﻟﺎﻭﺭ ﻟﻪ. ﻭﻣﻬﻤﺔ ﺇﳚﺎﺩ ﻣﺴﺎﺭ ﻣﻦ‬ ‫ﺍﻟﺘﺮﺗﻴﺐ ﺍﻷﻭﱄ ﺇﱃ ﺍﻷﺧﲑ ﺗﺘﺤﻮﻝ ﺍﻵﻥ ﺇﱃ ﺍﻟﺒﺤﺚ ﻋﻦ ﻣﺴﺎﺭ ﻣﻦ ﻭﺍﺣﺪ ﻣﻦ ﻫـﺬﻩ ﺍﻟﺘﺮﺗﻴﺒـﺎﺕ‬ ‫ﺍﳉﺪﻳﺪﺓ ﺍﻟﻨﺎﺷﺌﺔ ﺇﱃ ﺍﻟﺘﺮﺗﻴﺐ ﺍﻟﻨﻬﺎﺋﻲ. ﻭﻧﻈﺮﺍ ﻷﻥ ﻭﺍﺣﺪ ﻣﻦ ﻫﺬﻩ ﺍﻟﺘﺮﺗﻴﺒﺎﺕ ﺍﳉﺪﻳﺪﺓ ﺍﻟﻨﺎﺷﺌﺔ ﳚﺐ‬ ‫ﺃﻥ ﻳﻜﻮﻥ ﻋﻠﻰ ﺑﻌﺪ ﲢﺮﻳﻜﺔ ﻭﺍﺣﺪﺓ ﻣﻦ ﺍﳊﻞ، ﻓﺈﻧﻨﺎ ﻧﻜﻮﻥ ﻗﺪ ﺣﻘﻘﻨﺎ ﺗﻘﺪﻣﺎ ﻧﺎﺣﻴﺔ ﺇﳚﺎﺩ ﺍﳊـﻞ.‬ ‫ٍ‬ ‫ﻭﻣﺴﺎﺣﺔ ﺍﻟﺘﺮﺗﻴﺐ ﺍﻟﻨﺎﲡﺔ ﻋﻦ ﺷﺠﺮﺓ ﺍﻟﺒﺤﺚ ﳝﻜﻦ ﺃﻥ ﻳﺸﺎﺭ ﺇﻟﻴﻬﺎ ﻛﺮﺳﻢ ﳊﺎﻟﺔ ﺍﳌﺴﺎﺣﺔ. 
ﻭﻛﻞ‬ ‫ﻋﻘﺪﺓ ﰲ ﺍﻟﺮﺳﻢ ﺗﻌﺘﱪ ﺗﺮﺗﻴﺐ ﻭﻛﻞ ﺿﻠﻊ )ﺍﳋﻄﻮﻁ ﺍﻟﻮﺍﺻﻠﺔ ﺑﲔ ﺍﻟﺘﺮﺗﻴﺒﺎﺕ( ﻣﻦ ﺍﻟﺮﺳﻢ ﻳﺼﻞ ﺑﲔ‬ ‫ﺍﻟﺘﺮﺗﻴﺒﺎﺕ ﻓﻴﻤﻜﻦ ﺍﻟﻮﺻﻮﻝ ﲝﺮﻛﺔ ﻭﺍﺣﺪﺓ ﻟﻠﺒﻼﻃﺔ.‬ ‫ﺇﺣﺪﻯ ﺍﻟﻄﺮﻕ ﳊﻞ ﻫﺬﻩ ﺍﳌﺴﺄﻟﺔ ﺑﺎﻟﺘﻮﺍﺯﻱ ﻫﻲ ﻛﻤﺎ ﻳﻠﻲ:‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫97‬ ‫ﺃﻭﻻ: ﻋﻤﻞ ﻣﺴﺘﻮﻳﺎﺕ ﻗﻠﻴﻠﺔ ﻣﻦ ﺍﻟﺘﺮﺗﻴﺒﺎﺕ ﺑﺸﻜﻞ ﺗﺴﻠﺴﻠﻲ ﺑﺪﺀﹰﺍ ﻣﻦ ﺍﻟﺘﺗﻴﺐ ﺍﻷﻭﻝ ﺣﱴ ﻳﻜﻮﻥ‬ ‫ً‬ ‫ﻟﺪﻯ ﺷﺠﺮﺓ ﺍﻟﺒﺤﺚ ﻋﺪﺩ ﻛﺎﻑ ﻣﻦ ﺍﻟﻌﻘﺪ.‬ ‫ﺛﺎﻧﻴﺎ: ﻧﺒﺪﺃ ﺑﺈﺳﻨﺎﺩ ﻛﻞ ﻋﻘﺪﺓ ﳌﻬﻤﺔ ﻭﺫﻟﻚ ﻟﻌﻤﻞ ﺍﺳﺘﻄﻼﻉ ﺁﺧﺮ ﺣﱴ ﺗﺼﻞ ﺃﺣﺪ ﺍﳌﻬﺎﻡ ﺇﱃ ﺍﳊﻞ.‬ ‫ﹰ‬ ‫ﻭﲟﺠﺮﺩ ﻭﺻﻮﻝ ﺃﺣﺪ ﺍﳌﻬﺎﻡ ﺍﳌﺘﺰﺍﻣﻨﺔ ﺇﱃ ﺍﳊﻞ ﻓﺈﻧﻪ ﳝﻜﻦ ﺃﻥ ﲣﱪ ﺍﻟﺒﻘﻴﺔ ﻹﻳﻘﺎﻑ ﺍﻟﺒﺤﺚ.‬ ‫ﺍﻟﺸﻜﻞ)41-3( ﻳﻮﺿﺢ ﺗﻘﺴﻴﻢ ﺇﱃ ﺃﺭﺑﻌﺔ ﻣﻬﺎﻡ ﻭﺍﻟﺬﻱ ﺗﺘﻮﺻﻞ ﻓﻴﻪ ﺍﳌﻬﻤﺔ ٤ ﺇﱃ ﺍﳊﻞ.‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫08‬ ‫اﻟﺸﻜﻞ)41-3(: اﻷوﺿﺎع اﻟﺘﻲ ﺕﻨﺘﺞ ﻋﻦ ﻣﺜﺎل ﻟﻤﺴﺄﻟﺔ أﺡﺠﻴﺔ اﻟﻤﺮﺏﻊ‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫18‬ ‫ﻻﺣﻆ ﺃﻥ ﺍﻟﺘﻘﺴﻴﻢ ﺍﻻﺳﺘﻜﺸﺎﰲ ﻗﺪ ﻳﺒﺪﻭ ﻭﻛﺄﻧﻪ ﻣﺸﺎﺑﻪ ﻟﺘﻘﺴﻴﻢ ﺍﻟﺒﻴﺎﻧﺎﺕ ) ﳝﻜﻦ ﺍﻟﻨﻈﺮ ﺇﱃ‬ ‫ﻣﺴﺎﺣﺔ ﺍﻟﺒﺤﺚ ﻋﻠﻰ ﺃﻬﻧﺎ ﺍﻟﺒﻴﺎﻧﺎﺕ ﺍﻟﱵ ﰎ ﺗﻘﺴﻴﻤﻬﺎ( ﻭﻟﻜﻦ ﺍﻟﺘﻘﺴﻴﻤﲔ ﳐﺘﻠﻔﲔ ﰲ ﺍﳉﺎﻧﺐ ﺍﻟﺘﺎﱄ؛‬ ‫ﰲ ﺗﻘﺴﻴﻢ ﺍﻟﺒﻴﺎﻧﺎﺕ ﻳﺘﻢ ﺗﻨﻔﻴﺬ ﺍﳌﻬﺎﻡ ﺑﺘﻤﺎﻣﻬﺎ ﻓﻜﻞ ﻣﻬﻤﺔ ﺗﺆﺩﻱ ﺣﺴﺎﺑﺎﺕ ﻣﻔﻴﺪﺓ ﻬﺑـﺪﻑ ﺣـﻞ‬ ‫ﺍﳌﺴﺄﻟﺔ. ﺃﻣﺎ ﰲ ﺍﻟﺘﻘﺴﻴﻢ ﺍﻻﺳﺘﻜﺸﺎﰲ ﻓﻴﻤﻜﻦ ﺇﻳﻘﺎﻑ ﺗﻨﻔﻴﺬ ﺍﳌﻬﻤﺔ ﻭﻟﻮ ﱂ ﺗﻜﺘﻤﻞ ﻭﺫﻟﻚ ﺣﺎﳌـﺎ‬ ‫ﺗﺘﻮﺻﻞ ﺃﺣﺪ ﺍﳌﻬﺎﻡ ﺇﱃ ﺍﳊﻞ ﺍﻟﻨﻬﺎﺋﻲ. ﻭﻟﺬﺍ ﻓﺎﻥ ﺃﻗﺴﺎﻡ ﻣﺴﺎﺣﺔ ﺍﻟﺒﺤﺚ ﺍﳌـﺴﺘﺨﺪﻣﺔ ﻟﻠﺘﺮﻛﻴـﺐ‬ ‫ﺍﳌﺘﻮﺍﺯﻱ ﳝﻜﻦ ﺃﻥ ﺗﻜﻮﻥ ﳐﺘﻠﻔﺔ ﻋﻦ ﺍﻟﱵ ﺍﺳﺘﺨﺪﻣﺖ ﺍﳋﻮﺍﺭﺯﻣﻴﺔ ﺍﻟﺘﺴﻠﺴﻠﻴﺔ. ﻭﻣﻦ ﺍﳉﺪﻳﺮ ﺫﻛﺮﻩ‬ ‫ﺃﻥ ﺍﻟﻌﻤﻞ ﺍﳌﻨﻔﺬ ﺑﺎﺳﺘﺨﺪﺍﻡ ﺍﻟﺼﻴﻐﺔ ﺍﳌﺘﻮﺍﺯﻳﺔ ﻗﺪ ﻳﻜﻮﻥ ﺃﻗﻞ ﺃﻭ ﺃﻛﺜﺮ ﻣﻨﻪ ﰲ ﺍﳋﻮﺍﺭﺯﻣﻴﺔ ﺍﻟﺘﺴﻠﺴﻠﻴﺔ،‬ ‫ﻓﻌﻠﻰ ﺳﺒﻴﻞ ﺍﳌﺜﺎﻝ ﳝﻜﻦ ﻣﻼﺣﻈﺔ ﻣﺴﺎﺣﺔ ﺍﻟﺒﺤﺚ ﺍﻟﱵ ﰎ ﺗﻘﺴﻴﻤﻬﺎ ﺇﱃ ﺃﺭﺑﻌﺔ ﻣﻬﺎﻡ ﻣﺘﺰﺍﻣﻨﺔ ﻛﻤـﺎ‬ ‫ﻫﻮ ﻣﻮﺿﺢ ﺑﺎﻟﺸﻜﻞ)51-3( . ﻓﺈﺫﺍ ﻛﺎﻥ ﺍﳊﻞ ﻣﻮﺟﻮﺩﺍ ﰲ ﺑﺪﺍﻳﺔ ﻣﺴﺎﺣﺔ ﺍﻟﺒﺤﺚ ﻣﺘﻮﺍﻓـﻖ ﻣـﻊ‬ ‫ﺍﳌﻬﻤﺔ 3 ) ﺍﻟﺸﻜﻞ )‪ ( (3-15.a‬ﻓﺈﻧﻪ ﺳﻴﺘﻢ ﺍﻟﻌﺜﻮﺭ ﻋﻠﻴﻪ ﻣﺒﺎﺷﺮﺓ ﰲ ﺍﻟﺼﻴﻐﺔ ﺍﳌﺘﻮﺍﺯﻳـﺔ. ﺃﻣـﺎ ﰲ‬ ‫ﺍﳋﻮﺍﺭﺯﻣﻴﺔ ﺍﻟﺘﺴﻠﺴﻠﻴﺔ ﻓﺴﻴﺘﻢ ﺍﻟﻌﺜﻮﺭ ﻋﻠﻰ ﺍﳊﻞ ﺑﻌﺪ ﺗﻨﻔﻴﺬ ﻋﻤﻞ ﻣﺘﻜﺎﻓﺊ ﻣﻊ ﺍﻟﺒﺤﺚ ﰲ ﺍﳌﺴﺎﺣﺔ‬ ‫ﺍﻟﻜﻠﻴﺔ ﻟﻠﻤﻬﺎﻡ 1ﻭ 2 . 
ﻭﻣﻦ ﻧﺎﺣﻴﺔ ﺃﺧﺮﻯ ﺇﺫﺍ ﻛﺎﻥ ﺍﳊﻞ ﻣﻮﺟﻮﺩﺍ ﺑﺎﻟﻘﺮﺏ ﻣﻦ ﻬﻧﺎﻳـﺔ ﻣـﺴﺎﺣﺔ‬ ‫ﺍﻟﺒﺤﺚ ﺍﳌﺘﻮﺍﻓﻖ ﻣﻊ ﺍﳌﻬﻤﺔ 1 ) ﺍﻟﺸﻜﻞ)‪ ( (3-15.b‬ﻓﺴﻮﻑ ﻳﻘﻮﻡ ﺍﻟﺘﺮﻛﻴﺐ ﺍﳌﺘﻮﺍﺯﻱ ﺑﻌﻤﻞ ﺃﺭﺑﻌﺔ‬ ‫ﺃﺿﻌﺎﻑ ﺍﻟﻌﻤﻞ ﺍﻟﺬﻱ ﺗﻘﻮﻡ ﺑﻪ ﺍﳋﻮﺍﺭﺯﻣﻴﺔ ﺍﻟﺘﺴﻠﺴﻠﻴﺔ ﻭﻟﻦ ﻳﺆﺩﻱ ﺇﱃ ﺯﻳﺎﺩﺓ ﺍﻟﺴﺮﻋﺔ.‬ ‫ﱢ‬ ‫اﻟﺸﻜﻞ)51-3(: ﺕﻮﺿﻴﺢ ﻟﻠﺴﺮﻋﺔ اﻟﻐﻴﺮ ﻣﻨﺘﻈﻤﺔ اﻟﻨﺎﺕﺠﺔ ﻋﻦ اﻟﺘﻘﺴﻴﻢ اﻻﺳﺘﻜﺸﺎﻓﻲ.‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫28‬ ‫4.3.3 ﺍﻟﺘﻘﺴﻴﻢ ﺍﻟﺘﺨﻤﻴﻨﻲ )‪(Speculative Decomposition‬‬ ‫ﻳﺴﺘﺨﺪﻡ ﺍﻟﺘﻘﺴﻴﻢ ﺍﻟﺘﺨﻤﻴﲏ ﰲ ﺍﻟﱪﺍﻣﺞ ﺍﻟﱵ ﻗﺪ ﺗﺄﺧﺬ ﺗﻔﺮﻉ ﻭﺍﺣﺪ ﻣﻦ ﺑﲔ ﻋﺪﺓ ﺗﻔﺮﻋـﺎﺕ‬ ‫ﺣﺴﺎﺑﻴﺔ ﺍﻋﺘﻤﺎﺩﺍ ﻋﻠﻰ ﻧﺘﺎﺋﺞ ﻟﻌﻤﻠﻴﺎﺕ ﺣﺴﺎﺑﻴﺔ ﺳﺎﺑﻘﺔ. ﻭﰲ ﻫﺬﻩ ﺍﳊﺎﻟﺔ، ﻓﺈﻧﻪ ﺃﺛﻨﺎﺀ ﺗﻨﻔﻴـﺬ ﺃﺣـﺪ‬ ‫ﺍﳌﻬﺎﻡ ﻟﻌﻤﻠﻴﺔ ﺣﺴﺎﺑﻴﺔ ﳍﺎ ﻧﺎﺗﺞ ﻳﺴﺘﻌﻤﻞ ﰲ ﻋﻤﻠﻴﺔ ﺣﺴﺎﺑﻴﺔ ﻻﺣﻘﺔ، ﻓﻴﻤﻜﻦ ﳌﻬﺎﻡ ﺃﺧﺮﻯ ﺃﻥ ﺗﺒﺪﺃ‬ ‫ﺍﳊﺴﺎﺏ ﻟﻠﻤﺮﺣﻠﺔ ﺍﻟﻘﺎﺩﻣﺔ ﺑﺸﻜﻞ ﻣﺘﺰﺍﻣﻦ. ﻳﺘﺸﺎﺑﻪ ﻫﺬﺍ ﺍﻟﺴﻴﻨﺎﺭﻳﻮ ﻣﻊ ﻋﻤﻠﻴﺔ ﺗﻘﻴﻴﻢ )ﺃﻭ ﺍﺧﺘﺒـﺎﺭ(‬ ‫ﻭﺍﺣﺪ ﺃﻭ ﺃﻛﺜﺮ ﻣﻦ ﺍﻟﺘﻔﺮﻋﺎﺕ ﺍﳋﺎﺻﺔ ﺑﻌﺒﺎﺭﺓ ‪ switch‬ﰲ ﻟﻐﺔ ‪ C‬ﺑﺸﻜﻞ ﻣﺘﺰﺍﻣﻦ ﻭﺫﻟﻚ ﻗﺒـﻞ ﺃﻥ‬ ‫ﻳﻜﻮﻥ ﺍﻟﺪﺧﻞ ﻟﻌﺒﺎﺭﺓ ‪ switch‬ﺟﺎﻫﺰﺍ. ﻭﰲ ﺃﺛﻨﺎﺀ ﺗﻨﻔﻴﺬ ﺃﺣﺪ ﺍﳌﻬﺎﻡ ﻟﻠﻌﻤﻠﻴﺔ ﺍﳊﺴﺎﺑﻴﺔ ﺍﻟﱵ ﺳﺘﺤﻞ‬ ‫‪ switch‬ﻓﻤﻬﺎﻡ ﺃﺧﺮﻯ ﳝﻜﻦ ﺃﻥ ﺗﻠﺘﻘﻂ ﺍﻟﺘﻔﺮﻋﺎﺕ ﺍﳌﺘﻌﺪﺩﺓ ﻟﻌﺒﺎﺭﺓ ‪ switch‬ﺑـﺎﻟﺘﻮﺍﺯﻱ. ﻭﻋﻨـﺪ‬ ‫ﺍﻻﻧﺘﻬﺎﺀ ﻣﻦ ﺣﺴﺎﺏ ﺍﻟﺪﺧﻞ ﻟﻌﺒﺎﺭﺓ ‪ switch‬ﺳﻴﺘﻢ ﺍﺳﺘﺨﺪﺍﻡ ﺍﻟﺘﻔﺮﻉ ﺍﻟﺼﺤﻴﺢ ﺑﻴﻨﻤﺎ ﻳﺘﻢ ﺇﳘـﺎﻝ‬ ‫ﺍﻟﺘﻔﺮﻋﺎﺕ ﺍﻷﺧﺮﻯ. ﺇﻥ ﺯﻣﻦ ﺍﻟﺘﺸﻐﻴﻞ ﺍﳌﺘﻮﺍﺯﻱ ﺃﻗﻞ ﻣﻨﻪ ﰲ ﺍﻟﺘﺴﻠﺴﻠﻲ ﲟﻘﺪﺍﺭ ﺍﻟﺰﻣﻦ ﺍﻟﻼﺯﻡ ﻟﺘﻘﻴﻴﻢ‬ ‫ﺍﻟﺸﺮﻁ ﺍﻟﺬﻱ ﺗﻌﺘﻤﺪ ﻋﻠﻴﻪ ﺍﳌﻬﻤﺔ ﺍﻟﻘﺎﺩﻣﺔ ﺑﺴﺒﺐ ﺃﻥ ﻫﺬﺍ ﺍﻟﺰﻣﻦ ﳝﻜـﻦ ﺍﻻﺳـﺘﻔﺎﺩﺓ ﻣﻨـﻪ ﻷﺩﺍﺀ‬ ‫ﺣﺴﺎﺑﺎﺕ ﻣﻔﻴﺪﺓ ﻟﻠﻤﺮﺣﻠﺔ ﺍﻟﺘﺎﻟﻴﺔ ﺑﺎﻟﺘﻮﺍﺯﻱ. ﻭﻋﻠﻰ ﺃﻱ ﺣﺎﻝ ﻓﺈﻥ ﺻﻴﻐﺔ ﺍﻟﺘﻮﺍﺯﻱ ﻟﻌﺒﺎﺭﺓ ‪switch‬‬ ‫ﺗﺆﺩﻱ ﻟﺒﻌﺾ ﺍﳍﺪﺭ ﰲ ﺍﳊﺴﺎﺏ. ﻭﻣﻦ ﺃﺟﻞ ﺇﻗﻼﻝ ﻫﺬﺍ ﺍﳍﺪﺭ ﰲ ﺍﳊﺴﺎﺏ ﻓﺈﻧﻪ ﳝﻜﻦ ﺍﺳـﺘﻌﻤﺎﻝ‬ ‫ﺻﻴﻐﺔ ﻣﻌﺪﻟﺔ ﻗﻠﻴﻼ ﻣﻦ ﺍﻟﺘﻘﺴﻴﻢ ﺍﻟﺘﺨﻤﻴﲏ ﻭﺧﺼﻮﺻﺎ ﰲ ﺍﻷﻭﺿﺎﻉ ﺍﻟﱵ ﻳﻜﻮﻥ ﻓﻴﻬﺎ ﺃﺣﺪ ﺍﻟﻨﺘـﺎﺋﺞ‬ ‫ﺃﻛﺜﺮ ﺍﺣﺘﻤﺎﻻ ﻣﻦ ﺍﻟﻨﺘﺎﺋﺞ ﺍﻷﺧﺮﻯ. ﻭﰲ ﻫﺬﻩ ﺍﳊﺎﻟﺔ ﺳﻴﺘﻢ ﺃﺧﺬ ﺍﻟﺘﻔﺮﻉ ﺍﻷﻛﺜﺮ ﺍﺣﺘﻤﺎﻻ ﺑﺎﻟﺘﻮﺍﺯﻱ‬ ‫ﹰ‬ ‫ﻣﻊ ﺍﻟﻌﻤﻠﻴﺔ ﺍﳊﺴﺎﺑﻴﺔ ﺍﻟﺴﺎﺑﻘﺔ. ﻭﰲ ﺣﺎﻝ ﻛﺎﻧﺖ ﺍﻟﻨﺘﺎﺋﺞ ﳐﺘﻠﻔﺔ ﻋﻦ ﺍﳌﺘﻮﻗﻊ ﻓﺈﻥ ﺍﻟﻌﻤﻠﻴﺔ ﺍﳊـﺴﺎﺑﻴﺔ‬ ‫ﺗﺮﺗﺪ ﻟﻠﻮﺭﺍﺀ ﻭﻳﺘﻢ ﺍﺧﺘﻴﺎﺭ ﺍﻟﺘﻔﺮﻉ ﺍﻟﺼﺤﻴﺢ.‬ ‫ﻭﺍﻟﺘﺴﺮﻳﻊ ﺍﻟﺬﻱ ﳝﻜﻦ ﺃﻥ ﻳﻨﺘﺞ ﻋﻦ ﺍﻟﺘﻘﺴﻴﻢ ﺍﻟﺘﺨﻤﻴﲏ ﳝﻜﻦ ﺃﻥ ﻳﻜﻮﻥ ﻛﺒﲑﹰﺍ ﺇﺫﺍ ﻛﺎﻥ ﻫﻨﺎﻟﻚ‬ ‫ﻣﺮﺍﺣﻞ ﲣﻤﻴﻨﻴﺔ ﻣﺘﻌﺪﺩﺓ. 
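The switch-statement picture of speculative decomposition can be sketched as follows: all branches run concurrently while the branch condition is still being evaluated, and only the branch the condition selects is kept (the others are wasted work). The condition and branch functions below are purely hypothetical placeholders.

```python
from concurrent.futures import ThreadPoolExecutor

def speculative_switch(compute_condition, branches):
    """Run every branch speculatively in parallel with the condition
    computation; return the result of the branch the condition selects."""
    with ThreadPoolExecutor() as pool:
        cond_future = pool.submit(compute_condition)
        branch_futures = {key: pool.submit(fn) for key, fn in branches.items()}
        chosen = cond_future.result()
        # Results of the non-chosen branches are simply discarded.
        return branch_futures[chosen].result()

result = speculative_switch(
    lambda: 'b',                                # hypothetical condition
    {'a': lambda: 1 + 1, 'b': lambda: 2 * 3})   # hypothetical branches
```

As the text notes, the saving equals the time to evaluate the condition, at the cost of the discarded branch computations.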
ﻭﻛﻤﺜﺎﻝ ﻟﺘﻄﺒﻴﻖ ﻳﻜﻮﻥ ﻓﻴﻪ ﺍﻟﺘﻘﺴﻴﻢ ﺍﻟﺘﺨﻤﻴﲏ ﻧﺎﻓﻌﺎ : ﳏﺎﻛﺎﺓ ﺍﳊـﺪﺙ‬ ‫ﺍﳌﺘﻘﻄﻊ، ﻭ ﺳﻌﻄﻲ ﶈﺔ ﻣﺒﺴﻄﺔ ﳍﺬﻩ ﺍﳌﺴﺄﻟﺔ.‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫38‬ ‫اﻟﻤﺜﺎل)7-3(: ﺕﻮازي ﻣﺤﺎآﺎة اﻟﺤﺪث اﻟﻤﺘﻘﻄﻊ‬ ‫ﺑﺎﻋﺘﺒﺎﺭ ﺃﻥ ﻟﺪﻳﻨﺎ ﳏﺎﻛﺎﺓ ﻟﻨﻈﺎﻡ ﳑﺜﻞ ﺑﺸﺒﻜﺔ ﺃﻭ ﺑﻴﺎﻥ ﻣﻮﺟﻪ، ﲝﻴﺚ ﲤﺜﻞ ﺍﻟﻌﻘـﺪ ﰲ ﻫـﺬﻩ‬ ‫ﺍﻟﺸﺒﻜﺔ ﻋﻨﺎﺻﺮ ، ﻭﻛﻞ ﻋﻨﺼﺮ ﻟﻪ ﻋﺎﺯﻝ ﻣﺪﺧﻼﺕ ﻟﻠﻮﻇﺎﺋﻒ ، ﻭﺍﳊﺎﻟﺔ ﺍﻷﻭﻟﻴﺔ ﻟﻜﻞ ﻋﻨـﺼﺮ ﺃﻭ‬ ‫ﻋﻘﺪﻩ ﺗﻜﻮﻥ ﻋﺎﻃﻠﺔ ، ﻛﻞ ﻋﻨﺼﺮ ﻋﺎﻃﻞ ﻳﻘﻮﻡ ﺑﺄﺧﺬ ﺍﻟﻮﻇﺎﺋﻒ ﻣﻦ ﻃﺎﺑﻮﺭ ﺍﳌﺪﺧﻼﺕ ، ﻓﺈﺫﺍ ﻛﺎﻥ‬ ‫ﻫﻨﺎﻟﻚ ﻭﻇﻴﻔﺔ ﻓﺈﻧﻪ ﻳﻌﺎﳉﻬﺎ ﺑﻜﻤﻴﺔ ﳏﺪﺩﺓ ﻣﻦ ﺍﻟﻮﻗﺖ ، ﻭﻣﻦ ﰒ ﻳﻀﻌﻬﺎ ﰲ ﻋﺎﺯﻝ ﺍﻟﺪﺧﻞ ﻟﻠﻌﻨﺼﺮ‬ ‫ﺍﳌﺮﺗﺒﻂ ﻣﻌﻪ ﻣﻦ ﻃﺮﻓﻪ ﺍﳋﺎﺭﺝ، ﻗﺪ ﻳﻨﺘﻈﺮ ﺍﻟﻌﻨﺼﺮ ﺑﻌﺾ ﺍﻟﻮﻗﺖ ﺇﺫﺍ ﻛﺎﻥ ﻋﺎﺯﻝ ﺍﳌﺪﺧﻼﺕ ﻷﺣﺪ‬ ‫ﺟﲑﺍﻧﻪ ﻏﲑ ﻓﺎﺭﻍ ﻭﻳﻜﻮﻥ ﺍﻻﻧﺘﻈﺎﺭ ﺣﱴ ﻳﻘﻮﻡ ﺍﳉﺎﺭ ﺑﺎﻟﺘﻘﺎﻁ ﺍﻟﻮﻇﻴﻔﺔ ﻟﻴﺘﺮﻙ ﻣﺴﺎﺣﺔ ﻓﺎﺭﻏـﺔ ﰲ‬ ‫ﺍﻟﻌﺎﺯﻝ. ﻫﻨﺎﻟﻚ ﻋﺪﺩ ﳏﺪﺩ ﻣﻦ ﺃﻧﻮﺍﻉ ﻭﻇﺎﺋﻒ ﺍﳌﺪﺧﻼﺕ. ﳐﺮﺟﺎﺕ ﺍﻟﻌﻨﺼﺮ ﻭﺍﻟﻮﻗـﺖ ﺍﻟـﻼﺯﻡ‬ ‫ﳌﻌﺎﳉﺔ ﺍﻟﻮﻇﻴﻔﺔ ﻫﻮ ﺍﻟﻌﻤﻞ ﻟﻮﻇﻴﻔﺔ ﺍﳌﺪﺧﻼﺕ .‬ ‫ﺍﳌﺴﺄﻟﺔ : ﳏﺎﻛﺎﺓ ﻋﻤﻞ ﺍﻟﺸﺒﻜﺔ ﻟﺴﻠﺴﻠﺔ ﻣﻌﻄﺎﺓ ﻣﻦ ﺍ ﻟﻮﻇﺎﺋﻒ ﺍﻟﺪﺍﺧﻠﺔ ﻭﺣﺴﺎﺏ ﺇﲨﺎﱄ ﺍﻟﻮﻗـﺖ‬ ‫ﻭﺍﳌﻈﺎﻫﺮ ﺍﻷﺧﺮﻯ ﺍﶈﺘﻤﻠﺔ ﻟﺴﻠﻮﻙ ﺍﻟﻨﻈﺎﻡ . ﻳﻮﺿﺢ ﺍﻟﺸﻜﻞ)61-3( ﺷﺒﻜﺔ ﻣﺒﺴﻄﺔ ﳊـﻞ ﻣـﺴﺄﻟﺔ‬ ‫ﺍﳊﺪﺙ ﺍﳌﺘﻘﻄﻊ.‬ ‫اﻟﺸﻜﻞ)61-3(:ﺷﺒﻜﺔ ﻣﺒﺴﻄﺔ ﻟﻤﺤﺎآﺎة اﻟﺤﺪث اﻟﻤﺘﻘﻄﻊ.‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫48‬ ‫ﻭﻣﺴﺄﻟﺔ ﳏﺎﻛﺎﺓ ﺗﺘﺎﺑﻊ ﻣﺪﺧﻞ ﺍﻟﻮﻇﺎﺋﻒ ﻋﻠﻰ ﺍﻟﺸﺒﻜﺔ ﺍﳌﻮﺻﻮﻓﺔ ﰲ ﺍﳌﺜﺎﻝ )7-3( ﻳﺒﺪﻭﺍ ﻭﻛﺄﻬﻧـﺎ‬ ‫ﺃﺳﺎﺳﺎ ﻣﺘﺘﺎﺑﻌﺔ ﻷﻥ ﺍﳌﺪﺧﻞ ﻟﻨﻔﺲ ﺍﳌﻜﻮﻥ ﻫﻮ ﺍﳌﺨﺮﺝ ﻟﻸﺧﺮ. ﻭﻣﻊ ﻫﺬﺍ ﻓﻴﻤﻜﻨﻨﺎ ﺍﻟﺘﻌﺮﻳﻒ ﳌﻬـﺎﻡ‬ ‫ﹰ‬ ‫ﲣﻤﻴﻨﻴﺔ ﺗﺒﺪﺃ ﲟﺤﺎﻛﺎﺓ ﺍﳉﺰﺀ ﺍﻟﻔﺮﻋﻲ ﻣﻦ ﺍﻟﺸﺒﻜﺔ، ﻭﻛﻞ ﻣﻨﻬﺎ ﳝﺜﻞ ﻣﺪﺧﻼﺕ ﻋﺪﻳﺪﺓ ﳑﻜﻨـﺔ ﰲ‬ ‫ﻫﺬﻩ ﺍﳌﺮﺣﻠﺔ. ﻭﻋﻨﺪﻣﺎ ﻳﺼﺒﺢ ﺍﳌﺪﺧﻞ ﺍﻟﻔﻌﻠﻲ ﳌﺮﺣﻠﺔ ﻣﺎ ﻣﺘﻮﻓﺮ ) ﻧﺘﻴﺠﺔ ﻟﺘﻜﻤﻠﺔ ﻣﻬﻤﺔ ﺍﺧﺘﻴﺎﺭ ﻣﻦ‬ ‫ﺍﳌﺮﺣﻠﺔ ﺍﻟﺴﺎﺑﻘﺔ( ﻋﻨﺪﺋﺬ ﻳﺘﻢ ﺇﻬﻧﺎﺀ ﺍﻟﻌﻤﻞ ﺍﳌﻄﻠﻮﺏ ﶈﺎﻛﺎﺓ ﻫﺬﺍ ﺍﳌـﺪﺧﻞ ﺇﺫﺍ ﻛـﺎﻥ ﺍﻟـﺘﺨﻤﲔ‬ ‫ﺻﺤﻴﺤﺎً, ﺃﻭ ﻳﻌﺎﺩ ﺑﺪﺀ ﳏﺎﻛﺎﺓ ﻫﺬﻩ ﺍﳌﺮﺣﻠﺔ ﻣﻊ ﺍﳌﺪﺧﻞ ﺍﻷﺻﺢ ﺇﺫﺍ ﻛﺎﻥ ﺍﻟﺘﺨﻤﲔ ﻏﲑ ﺻﺤﻴﺢ.‬ ‫ﻭﺍﻟﺘﻘﺴﻴﻢ ﺍﻟﺘﺨﻤﻴﲏ ﳜﺘﻠﻒ ﻋﻦ ﺍﻟﺘﻘﺴﻴﻢ ﺍﻻﺳﺘﻜﺸﺎﰲ ﰲ ﺍﻟﻨﻘﺎﻁ ﺍﻟﺘﺎﻟﻴﺔ:‬ ‫ ﰲ ﺍﻟﺘﻘﺴﻴﻢ ﺍﻟﺘﺨﻤﻴﲏ ﻳﻜﻮﻥ ﻣﺪﺧﻞ ﺍﻟﻔﺮﻉ ﺍﻟﺬﻱ ﻳﺆﺩﻯ ﺇﱃ ﻋﺪﺓ ﻣﻬﺎﻡ ﻣﺘﻮﺍﺯﻳـﺔ ﻏـﲑ‬‫ﻣﻌﺮﻑ، ﺑﻴﻨﻤﺎ ﰲ ﺍﻟﺘﻘﺴﻴﻢ ﺍﻻﺳﺘﻜﺸﺎﰲ ﻳﻜﻮﻥ ﳐﺮﺝ ﺍﳌﻬﺎﻡ ﺍﻟﻨﺎﺗﺞ ﻣﻦ ﺍﻟﻔﺮﻉ ﻏﲑ ﻣﻌﺮﻭﻑ.‬ ‫ ﰲ ﺍﻟﺘﻘﺴﻴﻢ ﺍﻟﺘﺨﻤﻴﲏ ﺗﺆﺩﻱ ﺍﳋﻮﺍﺭﺯﻣﻴﺔ ﺍﻟﺘﺴﻠﺴﻠﻴﺔ ﻣﻬﻤﺔ ﻭﺍﺣﺪﺓ ﳏـﺪﺩﺓ ﰲ ﻣﺮﺣﻠـﺔ‬‫ﺍﻟﺘﺨﻤﲔ ﻷﻬﻧﺎ ﻋﻨﺪﻣﺎ ﺗﺼﻞ ﺑﺪﺍﻳﺔ ﺍﳌﺮﺣﻠﺔ ﻓﺈﻬﻧﺎ ﺗﻌﺮﻑ ﺑﺎﻟﻀﺒﻂ ﻣﺎ ﺍﻟﻔﺮﻉ ﺍﻟﺬﻱ ﺗﺄﺧﺬﻩ. 
ﻭﺑﺎﳌﺒﺎﺩﺭﺓ‬ ‫ﲝﺴﺎﺏ ﺇﻣﻜﺎﻧﻴﺎﺕ ﺍﻟﺘﻀﺎﻋﻒ ﺍﻟﱵ ﻳﺘﺤﻘﻖ ﻭﺍﺣﺪ ﻣﻨﻬﺎ ﻓﻘﻂ ، ﻭﺍﻟﱪﻧﺎﻣﺞ ﺍﳌﺘـﻮﺍﺯﻱ ﺍﳌـﺴﺘﺨﺪﻡ‬ ‫ﻟﻠﺘﻘﺴﻴﻢ ﺍﻟﺘﺨﻤﻴﲏ ﻳﺆﺩﻱ ﻋﻤﻞ ﻭﺍﺣﺪ ﳎﻤﻞ ﺃﻛﺜﺮ ﻣﻦ ﻧﻈﺮﻳﺔ ﺍﳌﺴﻠﺴﻞ.‬ ‫ ﻭﺣﱴ ﻟﻮ ﰎ ﺍﺳﺘﻜﺸﺎﻑ ﺃﺣﺪ ﺍﻹﻣﻜﺎﻧﻴﺎﺕ ﺑﺎﻟﺘﺨﻤﲔ ﻓﺈﻥ ﺍﳋﻮﺍﺭﺯﻣﻴﺔ ﺍﳌﺘﻮﺍﺯﻳﺔ ﳝﻜـﻦ ﺃﻥ‬‫ﺗﺆﺩﻱ ﻗﺪﺭﹰﺍ ﺃﻛﱪ ﺃﻭ ﻧﻔﺲ ﺍﻟﻘﺪﺭ ﻣﻦ ﺍﻟﻌﻤﻞ ﻣﻦ ﺍﳋﻮﺍﺭﺯﻣﻴﺔ ﺍﻟﺘﺴﻠﺴﻠﻴﺔ.‬ ‫ ﻭﻣﻦ ﻧﺎﺣﻴﺔ ﺃﺧﺮﻯ ﻓﻔﻲ ﺍﻟﺘﻘﺴﻴﻢ ﺍﻻﺳﺘﻜﺸﺎﰲ ﳝﻜﻦ ﻟﻠﺨﻮﺍﺭﺯﻣﻴﺔ ﺍﻟﺘﺴﻠﺴﻠﻴﺔ ﺍﺳﺘﻜﺸﺎﻑ‬‫ﺑﺪﺍﺋﻞ ﳐﺘﻠﻔﺔ ﻭﺍﺣﺪﺓ ﺗﻠﻮ ﺍﻷﺧﺮﻯ ﻷﻥ ﺍﻟﻔﺮﻉ ﺍﻟﺬﻱ ﳝﻜﻦ ﺃﻥ ﻳﻮﺻﻞ ﺇﱃ ﺍﳊﻞ ﻏـﲑ ﻣﻌـﺮﻭﻑ‬ ‫ﻣﺴﺒﻘﺎ. ﻭﻟﺬﻟﻚ ﻓﺎﻥ ﺍﻟﱪﻧﺎﻣﺞ ﺍﳌﺘﻮﺍﺯﻱ ﳝﻜﻦ ﺃﻥ ﻳﺆﺩﻯ ﻧﻔﺲ ﺍﻟﻌﻤﻞ ﺍﺠﻤﻟﻤﻞ ﺃﻭ ﺃﻗـﻞ ﺃﻭ ﺃﻛﺜـﺮ‬ ‫ﺑﺎﳌﻘﺎﺭﻧﺔ ﺑﺎﳋﻮﺍﺭﺯﻣﻴﺔ ﺍﻟﺘﺴﻠﺴﻠﻴﺔ ﺍﳌﻌﺘﻤﺪﺓ ﻋﻠﻰ ﻣﻮﻗﻊ ﺍﳊﻞ ﰲ ﻣﺴﺎﺣﺔ ﺍﻟﺒﺤﺚ.‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫58‬ ‫5.3.3‬ ‫ﺍﻟﺘﻘﺴﻴﻢ ﺍﳌﺨﺘﻠﻂ )‪(Hybrid Decompositions‬‬ ‫ﻧﻮﻗﺶ ﻋﺪﺩ ﻣﻦ ﻃﺮﻕ ﺍﻟﺘﻘﺴﻴﻢ ﺍﻟﱵ ﳝﻜﻦ ﺍﺳﺘﺨﺪﺍﻣﻬﺎ ﻟﻠﺤﺼﻮﻝ ﻋﻠﻰ ﺻﻴﻎ ﻣﺘﺰﺍﻣﻨﺔ ﻟﻠﻌﺪﻳـﺪ‬ ‫ﻣﻦ ﺍﳋﻮﺍﺭﺯﻣﻴﺎﺕ، ﻭﺗﻘﻨﻴﺎﺕ ﺍﻟﺘﻘﺴﻴﻢ ﺍﻟﱵ ﻣﺮﺕ ﻣﻌﻨﺎ ﻟﻴﺴﺖ ﺣﺼﺮﻳﺔ ﺍﻻﺳﺘﺨﺪﺍﻡ ﺃﻱ ﺃﻧﻪ ﳝﻜﻦ –‬ ‫ﰲ ﺍﻟﻐﺎﻟﺐ- ﺃﻥ ﻳﺘﻢ ﲨﻌﻬﺎ ﻣﻌﺎ، ﻭﻏﺎﻟﺒﺎ ﻓﺈﻥ ﺍﻟﻌﻤﻠﻴﺎﺕ ﺍﳊﺴﺎﺑﻴﺔ ﺗﻜﻮﻥ ﻣﺮﻛﺒﺔ ﻣﻦ ﻋﺪﺓ ﻣﺮﺍﺣﻞ،‬ ‫ﹰ‬ ‫ﻭﰲ ﺑﻌﺾ ﺍﻷﺣﻴﺎﻥ ﻳﻜﻮﻥ ﻣﻦ ﺍﻟﻀﺮﻭﺭﻱ ﺗﻄﺒﻴﻖ ﺃﻧﻮﺍﻉ ﳐﺘﻠﻔﺔ ﻣﻦ ﺍﻟﺘﻘﺴﻴﻢ ﰲ ﺍﳌﺮﺍﺣﻞ ﺍﳌﺨﺘﻠﻔﺔ،‬ ‫ﻓﻌﻠﻰ ﺳﺒﻴﻞ ﺍﳌﺜﺎﻝ ﻋﻨﺪ ﺍﻟﺒﺤﺚ ﻋﻦ ﺍﻟﻌﺪﺩ ﺍﻷﺻﻐﺮ ﺿﻤﻦ ﳎﻤﻮﻋﺔ ﻛﺒﲑﺓ ﻣﻜﻮﻧﺔ ﻣﻦ ‪ N‬ﻋﺪﺩ ﻓﺈﻥ‬ ‫ﺍﻟﺘﻘﺴﻴﻢ ﺍﻟﻌﻮﺩﻱ ﲤﺎﻣﺎ ﻗﺪ ﻳﻨﺘﺞ ﻋﻨﻪ ﻣﻬﺎﻡ ﺃﻛﺜﺮ ﻣﻦ ﻋﺪﺩ ﺍﻹﺟﺮﺍﺋﻴﺎﺕ ‪ P‬ﺍﳌﺘﻮﻓﺮﺓ)ﳝﻜـﻦ ﺍﻋﺘﺒـﺎﺭ‬ ‫ﺍﻹﺟﺮﺍﺋﻴﺔ ﻋﻠﻰ ﺃﻬﻧﺎ ﻣﻌﺎﰿ(، ﻟﺬﻟﻚ ﻓﺈﻥ ﺍﻟﺘﻘﺴﻴﻢ ﺍﻟﻜﻔﺆ ﻳﻜﻮﻥ ﺑﺘﻘﺴﻴﻢ ﺍﳌـﺪﺧﻼﺕ ﺇﱃ ‪ P‬ﺟـﺰﺀ‬ ‫ﻣﺘﺴﺎﻭﻱ ﻭﳚﻌﻞ ﻛﻞ ﻣﻬﻤﺔ ﺗﻘﻮﻡ ﲝﺴﺎﺏ ﺍﻟﻌﺪﺩ ﺍﻷﺻﻐﺮ ﺿﻤﻦ ﺍﻟﺴﻠﺴﻠﺔ ﺍﻟﱵ ﺧﺼـﺼﺖ ﳍـﺎ.‬ ‫ﻭﺑﻌﺪ ﺫﻟﻚ ﳝﻜﻦ ﺍﳊﺼﻮﻝ ﻋﻠﻰ ﺍﻟﻨﺎﺗﺞ ﺍﻟﻨﻬﺎﺋﻲ ﺑﺈﳚﺎﺩ ﺍﻟﻌﺪﺩ ﺍﻷﺻﻐﺮ ﻣﻦ ﺍﻟﻨﺘﺎﺋﺞ ﺍﳌﺘﻮﺳﻄﺔ ﻟـ ‪P‬‬ ‫ﻭﺫﻟﻚ ﺑﺎﺳﺘﺨﺪﺍﻡ ﺍﻟﺘﻘﺴﻴﻢ ﺍﻟﻌﻮﺩﻱ،ﻛﻤﺎ ﰲ ﺍﻟﺸﻜﻞ )71-3(.‬ ‫اﻟﺸﻜﻞ)71-3( اﻟﺘﻘﺴﻴﻢ اﻟﻤﺨﺘﻠﻂ ﻹﻳﺠﺎد اﻟﻌﺪد اﻷﺹﻐﺮ ﻟﻤﺼﻔﻮﻓﺔ ﻣﻦ اﻟﺤﺠﻢ ٦١ ﺏﺎﺳﺘﺨﺪام أرﺏﻌﺔ ﻣﻬﺎم.‬ ‫ﻣﺜﺎﻝ ﺁﺧﺮ ﻟﺘﻮﺿﻴﺢ ﻓﻜﺮﺓ ﺍﻟﺘﻘﺴﻴﻢ ﺍﳌﺨﺘﻠﻂ. ﺑﻔﺮﺽ ﺃﻧﻨﺎ ﻧﺮﻳﺪ ﺗﻨﻔﻴﺬ ﺍﻟﻔﺮﺯ ﺍﻟﺴﺮﻳﻊ ﺑـﺸﻜﻞ‬ ‫ﻣﺘﺰﺍﻣﻦ. ﰲ ﺍﳌﺜﺎﻝ)3-3( ﺍﻟﺬﻱ ﺃﻭﺭﺩﻧﺎﻩ ﻋﻨﺪ ﺷﺮﺡ ﺍﻟﺘﻘﺴﻴﻢ ﺍﻟﻌﻮﺩﻱ ﺍﺳﺘﺨﺪﻣﻨﺎ ﺍﻟﺘﻘﺴﻴﻢ ﺍﻟﻌـﻮﺩﻱ‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫68‬ ‫ﻻﺷﺘﻘﺎﻕ ﺻﻴﻐﺔ ﻣﺘﺰﺍﻣﻨﺔ ﻣﻦ ﺍﻟﻔﺮﺯ ﺍﻟﺴﺮﻳﻊ. ﻳﻨﺘﺞ ﻋﻦ ﻫﺬﻩ ﺍﻟﺼﻴﻐﺔ ﻋﺪﺩ )‪ O(n‬ﻣﻬﺎﻡ ﻟﻔﺮﺯ ﺳﻠﺴﻠﺔ‬ ‫ﻣﻦ ﺍﳊﺠﻢ ‪ .n‬ﻭﻟﻜﻦ ﺑﺴﺒﺐ ﺍﻋﺘﻤﺎﺩ ﻫﺬﻩ ﺍﳌﻬﺎﻡ ﻋﻠﻰ ﺑﻌﻀﻬﺎ ﻭﻋﺪﻡ ﺗﺴﺎﻭﻱ ﺍﳊﺠﻢ ﺑﻴﻨﻬﺎ ﻓـﺈﻥ‬ ‫ﺍﻟﺘﺰﺍﻣﻦ ﺍﻟﻔﻌﺎﻝ ﻳﻌﺘﱪ ﳏﺪﻭﺩ ﺟﺪﺍ. 
ﻓﻌﻠﻰ ﺳﺒﻴﻞ ﺍﳌﺜﺎﻝ، ﺃﻭﻝ ﻣﻬﻤﺔ ﻟﺘﻘﺴﻴﻢ ﻗﺎﺋﻤﺔ ﺍﳌـﺪﺧﻼﺕ ﺇﱃ‬ ‫ﻗﺴﻤﲔ ﺗﺴﺘﻐﺮﻕ ﻣﺪﺓ )‪ ،O(n‬ﻭﺍﻟﱵ ﺗﻀﻊ ﺍﳊﺪ ﺍﻷﻋﻠﻰ ﳌﺴﺘﻮﻯ ﺍﻷﺩﺍﺀ ﺍﶈﺘﻤﻞ ﻋﻦ ﻃﺮﻳﻖ ﺍﻟﺘﻮﺍﺯﻱ.‬ ‫ﻭﻟﻜﻦ ﺧﻄﻮﺓ ﺗﻘﺴﻴﻢ ﺍﻟﻘﻮﺍﺋﻢ ﺍﻟﱵ ﻳﺘﻢ ﺃﺩﺍﺋﻬﺎ ﲟﻬﺎﻡ ﺍﻟﺘﻮﺍﺯﻱ ﻣﻦ ﺍﻟﻨﻮﻉ ﺍﻟﺴﺮﻳﻊ ﳝﻜﻦ ﺗﻘـﺴﻴﻤﻬﺎ‬ ‫ﺑﺎﺳﺘﺨﺪﺍﻡ ﺃﺳﻠﻮﺏ ﺗﻘﺴﻴﻢ ﺍﳌﺪﺧﻼﺕ ﺍﻟﺬﻱ ﲤﺖ ﻣﻨﺎﻗﺸﺘﻪ، ﻭﺍﻟﺘﻘﺴﻴﻢ ﺍﳌﺨﺘﻠﻂ ﺍﻟﻨﺎﺗﺞ ﰎ ﻓﻴﻪ ﺍﳉﻤﻊ‬ ‫ﺑﲔ ﺍﻟﺘﻘﺴﻴﻢ ﺍﻟﻌﻮﺩﻱ ﻭﺗﻘﺴﻴﻢ ﺑﻴﺎﻧﺎﺕ ﺍﳌﺪﺧﻼﺕ ﻭﻫﺬﺍ ﻳﺆﺩﻯ ﺇﱃ ﺻﻴﻐﺔ ﻣﺘﺰﺍﻣﻨﺔ ﺑﺪﺭﺟﺔ ﻛـﺒﲑﺓ‬ ‫ﳋﻮﺍﺭﺯﻣﻴﺔ ﺍﻟﻔﺮﺯ ﺍﻟﺴﺮﻳﻊ.‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫78‬ ‫4.3 أﻣﺜﻠﺔ ﻟﻠﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫ﺳﻮﻑ ﻧﺘﻄﺮﻕ ﻟﺒﻌﺾ ﺃﻣﺜﻠﺔ ﺍﳋﻮﺍﺭﺯﻣﻴﺎﺕ ﺍﳌﺘﻮﺍﺯﻳﺔ، ﻭﺳﻮﻑ ﺗﻜﻮﻥ ﺍﻟﻄﺮﻳﻘﺔ ﺍﻟﻌﺎﻣﺔ ﻟﻌـﺮﺽ‬ ‫ﺍﳋﻮﺍﺭﺯﻣﻴﺔ ﻫﻲ: ﻋﺮﺽ ﺍﻟﺼﻴﻐﺔ ﺍﻟﺘﺴﻠﺴﻠﻴﺔ ﻟﻠﺨﻮﺍﺭﺯﻣﻴﺔ ﰒ ﻣﻨﺎﻗﺸﺔ ﻛﻴﻔﻴﺔ ﺟﻌﻠﻬﺎ ﻣﺘﻮﺍﺯﻳﺔ.‬ ‫1.4.3 ﺥﻮارزﻣﻴﺔ اﻟﻔﺮز اﻟﻔﻘﺎﻋﻲ وﺕﻮاﺏﻌﻬﺎ )‪.(Bubble Sort‬‬ ‫ﺗﻘﻮﻡ ﺍﳋﻮﺍﺭﺯﻣﻴﺔ ﺍﻟﺘﺴﻠﺴﻠﻴﺔ ﻟﻠﻔﺮﺯ ﺍﻟﻔﻘﺎﻋﻲ ﺃﻭ ﻛﻤﺎ ﻳﺴﻤﻰ ﺃﺣﻴﺎﻧﺎ ﺑﺎﻟﻔﺮﺯ ﺑﺎﻟﺘﻌﻮﱘ )‬ ‫ﹰ‬ ‫‪ (sort‬ﲟﻘﺎﺭﻧﺔ ﻭﺍﺳﺘﺒﺪﺍﻝ ﺍﻟﻌﻨﺎﺻﺮ ﺍﳌﺘﺠﺎﻭﺭﺓ ﰲ ﺍﻟﺴﻠﺴﻠﺔ ﺍﻟﱵ ﺳﺘﺮﺗﺐ. ﻟﻴﻜﻦ ﻟﺪﻳﻨﺎ ﺍﻟﺴﻠﺴﻠﺔ > ,1‪a‬‬ ‫‪ ،< a2, ..., an‬ﺗﻘﻮﻡ ﺍﳋﻮﺍﺭﺯﻣﻴﺔ ﺃﻭﻻ ﺑﺈﺟﺮﺍﺀ 1-‪ n‬ﻋﻤﻠﻴﺔ "ﻣﻘﺎﺭﻧﺔ-ﻭﺍﺳﺘﺒﺪﺍﻝ" ﰲ ﺍﻟﺘﺮﺗﻴﺐ ﺍﻟﺘﺎﱄ:‬ ‫ﹰ‬ ‫)‪ .(a1, a2), (a2, a3), ..., (an-1, an‬ﻫﺬﻩ ﺍﳋﻄﻮﺓ ﺗﺰﻳﺢ ﺍﻟﻌﻨﺼﺮ ﺍﻷﻛﱪ ﺇﱃ ﻬﻧﺎﻳﺔ ﺍﻟﺴﻠﺴﻠﺔ. ﺑﻌﺪ ﺫﻟﻚ‬ ‫ﺳﻴﻢ ﲡﺎﻫﻞ ﺍﻟﻌﻨﺼﺮ ﺍﻷﺧﲑ ﻷﻧﻪ ﺃﺧﺬ ﺍﻟﺘﺮﺗﻴﺐ ﺍﻟﺼﺤﻴﺢ ﻟﻪ، ﰒ ﺳﻴﺘﻢ ﺗﻜـﺮﺍﺭ ﺇﻋـﺎﺩﺓ ﻋﻤﻠﻴـﺔ‬ ‫ﺍﳌﻘﺎﺭﻧﺔ-ﻭﺍﻻﺳﺘﺒﺪﺍﻝ ﻋﻠﻰ ﺍﻟﺴﻠﺴﻠﺔ ﺍﻟﻨﺎﲡﺔ ﻭﰲ ﻛﻞ ﺗﻜﺮﺍﺭ ﻳﺘﻢ ﺇﺯﺍﺣﺔ ﺍﻟﻌﻨﺼﺮ ﺍﻷﻛﱪ ﺇﱃ ﺁﺧـﺮ‬ ‫ﻣﻮﺿﻊ ﰲ ﺍﻟﺴﻠﺴﻠﺔ ﱂ ﻳﺘﻢ ﲡﺎﻫﻠﻪ. ﻭﺳﺘﻜﻮﻥ ﺍﻟﺴﻠﺴﻠﺔ ﻣﺮﺗﺒﺔ ﺑﻌﺪ ﻋﺪﺩ 1-‪ n‬ﻣـﻦ ﺍﻟﺘﻜـﺮﺍﺭﺍﺕ.‬ ‫ﳝﻜﻦ ﻟﻨﺎ ﲢﺴﲔ ﺃﺩﺍﺀ ﺧﻮﺍﺭﺯﻣﻴﺔ ﺍﻟﻔﺮﺯ ﺍﻟﻔﻘﺎﻋﻲ ﻭﺫﻟﻚ ﺑﺎﻹﻬﻧﺎﺀ ﻋﻨﺪﻣﺎ ﻻ ﻳﻜﻮﻥ ﻫﻨﺎﻙ ﺍﺳـﺘﺒﺪﺍﻝ‬ ‫ﺧﻼﻝ ﺍﻟﺘﻜﺮﺍﺭ. 
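The serial bubble sort just described, including the early-exit optimization when a pass performs no exchange, can be sketched as:

```python
def bubble_sort(a):
    """Repeatedly compare-exchange adjacent pairs; each pass floats the
    largest remaining element to the end. Stop early if a pass swaps nothing."""
    n = len(a)
    for i in range(n - 1, 0, -1):        # positions i+1..n-1 are already final
        swapped = False
        for j in range(i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]   # compare-exchange
                swapped = True
        if not swapped:                  # sequence already ordered
            break
    return a
```

The two nested Θ(n) loops give the Θ(n²) complexity derived below, and the strictly left-to-right order of the compare-exchanges is what makes this formulation inherently sequential.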
ﺧﻮﺍﺭﺯﻣﻴﺔ ﺍﻟﻔﺮﺯ ﺍﻟﻔﻘﺎﻋﻲ ﻣﻌﺮﻭﺿﺔ ﰲ ﺧﻮﺍﺭﺯﻣﻴﺔ )3-3(، ﻣﻊ ﻣﻼﺣﻈﺔ ﺃﻥ ﺍﻟﻌﺒﺎﺭﺓ‬ ‫ﹼ‬ ‫‪ compare-exchange‬ﻳﻘﺼﺪ ﻬﺑﺎ ﻣﻘﺎﺭﻧﺔ ﺍﻟﻌﻨﺼﺮﻳﻦ ﺍﻟﺬﻳﻦ ﻣﺮﺭﹰﺍ ﳍﺎ ﻓﺈﺫﺍ ﱂ ﻳﻜﻦ ﺗﺮﺗﻴﺒﻬﻤﺎ ﺳـﻠﻴﻤﺎ‬ ‫ﺣﻴﻨﻬﺎ ﻳﺘﻢ ﺍﺳﺘﺒﺪﺍﳍﻤﺎ.‬ ‫‪bubble‬‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫88‬ ‫ﺇﻥ ﺍﻟﺘﻜﺮﺍﺭ ﺿﻤﻦ ﺍﳊﻠﻘﺔ ﺍﻟﺪﺍﺧﻠﻴﺔ ﰲ ﺧﻮﺍﺭﺯﻣﻴﺔ ﺍﻟﻔﺮﺯ ﺍﻟﻔﻘﺎﻋﻲ ﺗﺄﺧﺬ ﻣﻦ ﺍﻟﻮﻗـﺖ )‪،Θ(n‬‬ ‫ﻭﻳﺘﻢ ﺃﺩﺍﺀ ﻣﺎ ﳎﻤﻮﻋﻪ )‪ Θ(n‬ﺗﻜﺮﺍﺭ)ﺑﺴﺒﺐ ﺍﳊﻠﻘﺔ ﺍﳋﺎﺭﺟﻴﺔ(، ﻭﺑﺎﻟﺘﺎﱄ ﺳﺘﻜﻮﻥ ﺩﺭﺟﺔ ﺍﻟﺘﻌﻘﻴـﺪ‬ ‫ﻟﻠﻔﺮﺯ ﺍﻟﻔﻘﺎﻋﻲ ﻣﺴﺎﻭﻳﺔ ﺇﱃ )2‪.Θ(n‬‬ ‫ﻣﻦ ﺍﻟﺼﻌﻮﺑﺔ ﲟﻜﺎﻥ ﺟﻌﻞ ﺧﻮﺍﺭﺯﻣﻴﺔ ﺍﻟﻔﺮﺯ ﺍﻟﻔﻘﺎﻋﻲ ﻣﺘﻮﺍﺯﻳﺔ، ﻭﻟﱪﻫﺎﻥ ﺫﻟﻚ، ﻓﻜﺮ ﻛﻴـﻒ‬ ‫ﺳﻴﺘﻢ ﺃﺩﺍﺀ ﻋﻤﻠﻴﺎﺕ ﺍﳌﻘﺎﺭﻧﺔ-ﻭﺍﻻﺳﺘﺒﺪﺍﻝ ﺃﺛﻨﺎﺀ ﻛﻞ ﻣﺮﺣﻠﺔ ﻣﻦ ﺍﳋﻮﺍﺭﺯﻣﻴﺔ )ﺍﻟﺴﻄﺮﻳﻦ 4 ﻭ 5 ﻣﻦ‬ ‫ﺧﻮﺍﺭﺯﻣﻴﺔ 3-3(. ﺗﻘﻮﻡ ﺧﻮﺍﺭﺯﻣﻴﺔ ﺍﻟﻔﺮﺯ ﺍﻟﻔﻘﺎﻋﻲ ﲟﻘﺎﺭﻧﺔ ﲨﻴﻊ ﺍﻷﺯﻭﺍﺝ ﺍﳌﺘﺠـﺎﻭﺭﺓ ﺑﺎﻟﺘﺮﺗﻴـﺐ؛‬ ‫ﻭﳍﺬﺍ ﺍﻟﺴﺒﺐ ﻓﻬﻲ ﺑﺎﻟﺪﺭﺟﺔ ﺍﻷﻭﱃ ﺧﻮﺍﺭﺯﻣﻴﺔ ﺗﺴﻠﺴﻠﻴﺔ. ﻭﰲ ﺍﻟﻘﺴﻢ ﺍﻟﺘﺎﱄ ﺳﻨﻌﺮﺽ ﺃﺣﺪ ﺃﻧﻮﺍﻉ‬ ‫ﺍﻟﻔﺮﺯ ﺍﻟﻔﻘﺎﻋﻲ ﻭﺍﻟﱵ ﺑﺎﻹﻣﻜﺎﻥ ﺃﻥ ﻳﺘﻢ ﺟﻌﻠﻬﺎ ﻣﺘﻮﺍﺯﻳﺔ.‬ ‫ﺥﻮارزﻣﻴﺔ )3-3(: ﺥﻮارزﻣﻴﺔ اﻟﻔﺮز اﻟﻔﻘﺎﻋﻲ اﻟﺘﺴﻠﺴﻠﻲ.‬ ‫;)1 +‬ ‫)‪procedure BUBBLE_SORT(n‬‬ ‫‪begin‬‬ ‫‪for i := n - 1 downto 1 do‬‬ ‫‪for j := 1 to i do‬‬ ‫‪compare-exchange(aj, aj‬‬ ‫‪end BUBBLE_SORT‬‬ ‫.1‬ ‫.2‬ ‫.3‬ ‫.4‬ ‫.5‬ ‫.6‬ ‫1.1.4.3 اﻹﺏﺪال اﻟﺰوﺟﻲ-اﻟﻔﺮدي )‪(Odd-Even Transposition‬‬ ‫ﺗﻘﻮﻡ ﺧﻮﺍﺭﺯﻣﻴﺔ "ﺍﻹﺑﺪﺍﻝ ﺍﻟﺰﻭﺟﻲ-ﺍﻟﻔﺮﺩﻱ" ﺑﻔﺮﺯ ‪ n‬ﻋﻨﺼﺮ ﰲ ‪ n‬ﻣﺮﺣﻠﺔ )‪ n‬ﺯﻭﺟـﻲ(،‬ ‫ﻛﻞ ﻣﺮﺣﻠﺔ ﺗﺘﻄﻠﺐ 2/‪ n‬ﻣﻦ ﻋﻤﻠﻴﺎﺕ ﺍﳌﻘﺎﺭﻧﺔ-ﻭﺍﻻﺳﺘﺒﺪﺍﻝ. ﻭﻫﺬﻩ ﺍﳋﻮﺍﺭﺯﻣﻴﺔ ﺗﺘﻨـﺎﻭﺏ ﺑـﲔ‬ ‫ﻣﺮﺣﻠﺘﲔ ﻭﳘﺎ ﻣﺮﺣﻠﺔ ﺍﻟﻔﺮﺩﻱ ﻭﻣﺮﺣﻠﺔ ﺍﻟﺰﻭﺟﻲ. ﺑﺎﻓﺘﺮﺍﺽ ﺃﻧﻨﺎ ﻧﺮﻳﺪ ﺗﺮﺗﻴﺐ ﺍﻟﺴﻠﺴﻠﺔ > ,2‪a1, a‬‬ ‫‪ .<..., an‬ﻓﺨﻼﻝ ﻣﺮﺣﻠﺔ ﺍﻟﻔﺮﺩﻱ، ﺳﻴﺘﻢ ﻣﻘﺎﺭﻧﺔ ﺍﻟﻌﻨﺎﺻﺮ ﺫﻭﺍﺕ ﺍﻟﺪﻟﻴﻞ ﺍﻟﻔﺮﺩﻱ ﻣﻊ ﻣﺎ ﳚﺎﻭﺭﻫﺎ‬ ‫ﺇﱃ ﺍﻟﻴﻤﲔ، ﻓﺈﺫﺍ ﱂ ﳛﻘﻘﺎ ﺷﺮﻁ ﺍﻟﺘﺮﺗﻴﺐ ﻓﺈﻧﻪ ﻳﺘﻢ ﺇﺑﺪﺍﻝ ﺃﻣﺎﻛﻨﻬﻤﺎ؛ ﻭﺑﺎﻟﺘﺎﱄ، ﻓﺎﻷﺯﻭﺍﺝ ) ,)2‪a1, a‬‬ ‫‪ ((a3, a4), ..., (an-1, an‬ﺗﻘﺎﺭﻥ-ﻭﺗﺴﺘﺒﺪﻝ )ﺑﻔﺮﺽ ﺃﻥ ‪ n‬ﺯﻭﺟﻴﺔ(. ﻭﻋﻠﻰ ﳓﻮ ﻣﺸﺎﺑﻪ ﻓﺈﻧﻪ ﺧـﻼﻝ‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫98‬ ‫ﺍﳌﺮﺣﺔ ﺍﻟﺰﻭﺟﻴﺔ، ﺳﻴﺘﻢ ﻣﻘﺎﺭﻧﺔ ﺍﻟﻌﻨﺎﺻﺮ ﺍﻟﱵ ﳍﺎ ﺩﻟﻴﻞ ﺯﻭﺟﻲ ﻣﻊ ﻣﺎ ﳚﺎﻭﺭﻫﺎ ﻧﺎﺣﻴﺔ ﺍﻟﻴﻤﲔ، ﻓﺈﺫﺍ ﱂ‬ ‫ﳛﻘﻘﺎ ﺷﺮﻁ ﺍﻟﺘﺮﺗﻴﺐ ﻓﺈﻧﻪ ﻳﺘﻢ ﺇﺑﺪﺍﻝ ﺃﻣﺎﻛﻨﻬﻤﺎ؛ ﻭﺑﺎﻟﺘﺎﱄ، ﻓﺎﻷﺯﻭﺍﺝ ) ,2-‪a2, a3), (a4, a5), ..., (an‬‬ ‫1-‪ (an‬ﻳﺘﻢ ﻣﻘﺎﺭﻧﺘﻬﺎ-ﻭﺍﺳﺘﺒﺪﺍﳍﺎ. ﻭﺑﻌﺪ ‪ n‬ﻣﺮﺣﻠﺔ ﻓﺈﻥ ﺍﻟﺴﻠﺴﻠﺔ ﺗﻜﻮﻥ ﻗﺪ ﺭﺗﺒﺖ ﺑﺎﻟﻔﻌـﻞ. 
ﻛـﻞ‬ ‫ﻣﺮﺣﻠﺔ ﻣﻦ ﺍﳋﻮﺍﺭﺯﻣﻴﺔ )ﻓﺮﺩﻳﺔ ﺃﻭ ﺯﻭﺟﻴﺔ( ﺗﺘﻄﻠﺐ )‪ Θ(n‬ﻋﻤﻠﻴﺔ ﻣﻘﺎﺭﻧﺔ، ﻭﲟﺎ ﺃﻥ ﻟﺪﻳﻨﺎ ﻋﺪﺩ ‪ n‬ﻣﻦ‬ ‫ﺍﳌﺮﺍﺣﻞ ﻓﻠﺬﻟﻚ ﺳﺘﻜﻮﻥ ﺩﺭﺟﺔ ﺍﻟﺘﻌﻘﻴﺪ ﻟﻠﺨﻮﺍﺭﺯﻣﻴـﺔ ﻫـﻲ )2‪ .Θ(n‬ﻳﻮﺿـﺢ ﺍﻟـﺸﻜﻞ)81-3(‬ ‫ﺧﻮﺍﺭﺯﻣﻴﺔ ﺍﻹﺑﺪﺍﻝ ﺍﻟﺰﻭﺟﻲ-ﺍﻟﻔﺮﺩﻱ ﻣﻦ ﺧﻼﻝ ﻣﺜﺎﻝ.‬ ‫اﻟﺸﻜﻞ)81-3(: ﻓﺮز 8 ﻋﻨﺎﺹﺮ )8=‪ (n‬ﺏﺎﺳﺘﺨﺪام ﺥﻮارزﻣﻴﺔ اﻹﺏﺪال اﻟﺰوﺟﻲ-اﻟﻔﺮدي، ﺥﻼل آﻞ ﻣﺮﺡﻠ ﺔ هﻨ ﺎك 8‬ ‫ﻋﻨﺎﺹﺮ ﻳﺘﻢ ﻣﻘﺎرﻥﺘﻬﺎ)8=‪(n‬‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫09‬ ‫اﻟﺨﻮارزﻣﻴﺔ )4-3(: اﻟﺨﻮارزﻣﻴﺔ اﻟﺘﺴﻠﺴﻠﻴﺔ ﻟﻺﺏﺪال اﻟﻔﺮدي-اﻟﺰوﺟﻲ.‬ ‫)‪procedure ODD-EVEN(n‬‬ ‫‪begin‬‬ ‫‪for i := 1 to n do‬‬ ‫‪begin‬‬ ‫‪if i is odd then‬‬ ‫‪for j := 0 to n/2 - 1 do‬‬ ‫;)2 + ‪compare-exchange(a2j + 1, a2j‬‬ ‫‪if i is even then‬‬ ‫‪for j := 1 to n/2 - 1 do‬‬ ‫;)1 + ‪compare-exchange(a2j, a2j‬‬ ‫‪end for‬‬ ‫‪end ODD-EVEN‬‬ ‫.1‬ ‫.2‬ ‫.3‬ ‫.4‬ ‫.5‬ ‫.6‬ ‫.7‬ ‫.8‬ ‫.9‬ ‫.01‬ ‫.11‬ ‫.21‬ ‫اﻟﺼﻴﻐﺔ اﻟﻤﺘﻮازﻳﺔ ﻟﺨﻮارزﻣﻴﺔ اﻹﺏﺪال اﻟﺰوﺟﻲ-اﻟﻔﺮدي:‬ ‫ﺇﻧﻪ ﻣﻦ ﺍﻟﺴﻬﻞ ﺃﻥ ﻧﻘﻮﻡ ﲜﻌﻞ ﺧﻮﺍﺭﺯﻣﻴﺔ ﺍﻟﻔﺮﺯ ﺑﺎﻹﺑﺪﺍﻝ ﺍﻟﺰﻭﺟﻲ-ﺍﻟﻔﺮﺩﻱ ﻣﺘﻮﺍﺯﻳﺔ، ﻓﺨﻼﻝ‬ ‫ﻛﻞ ﻣﺮﺣﻠﺔ ﻣﻦ ﺍﳋﻮﺍﺭﺯﻣﻴﺔ ﻳﺘﻢ ﺇﺟﺮﺍﺀ ﻋﻤﻠﻴﺔ ﻣﻘﺎﺭﻧﺔ-ﻭﺍﺳﺘﺒﺪﺍﻝ ﺑﲔ ﻋﺪﺓ ﺃﺯﻭﺍﺝ ﻣﻦ ﺍﻟﻌﻨﺎﺻـﺮ‬ ‫ﺑﻨﻔﺲ ﺍﻟﻮﻗﺖ. ﺑﻔﺮﺽ ﺃﻥ ﻟﺪﻳﻨﺎ ﺍﳊﺎﻟﺔ "ﻋﻨﺼﺮ ﻭﺍﺣﺪ ﻟﻜﻞ ﺇﺟﺮﺍﺋﻴﺔ". ﻭﺑﻔﺮﺽ ﺃﻥ ‪ n‬ﻫـﻮ ﻋـﺪﺩ‬ ‫ﺍﻹﺟﺮﺍﺋﻴﺎﺕ )ﺃﻳﻀﺎ ‪ n‬ﻫﻮ ﻋﺪﺩ ﺍﻷﻋﺪﺍﺩ ﺍﻟﱵ ﻧﺮﻳﺪ ﻓﺮﺯﻫﺎ(. ﻭﺑﺎﻓﺘﺮﺍﺽ ﺃﻥ ﺍﻹﺟﺮﺍﺋﻴﺎﺕ ﻣﺮﺗﺒـﺔ ﰲ‬ ‫ﻣﺼﻔﻮﻓﺔ ﺃﺣﺎﺩﻳﺔ ﺍﻟﺒﻌﺪ. ﻓﻔﻲ ﺍﻟﺒﺪﺍﻳﺔ ﺳﻴﺴﺘﻘﺮ ﺍﻟﻌﻨﺼﺮ ‪ ai‬ﰲ ﺍﻟﻌﻤﻠﻴﺔ ‪ pi‬ﺣﻴـﺚ ‪. i= 1,2,3,...,n‬‬ ‫ﺧﻼﻝ ﺍﳌﺮﺣﻠﺔ ﺍﻟﻔﺮﺩﻳﺔ ﺳﺘﻘﻮﻡ ﻛﻞ ﺇﺟﺮﺍﺋﻴﺔ ﳍﺎ ﺩﻟﻴﻞ ﻓﺮﺩﻱ ﺑﺈﺟﺮﺍﺀ ﻋﻤﻠﻴﺔ ﻣﻘﺎﺭﻧـﺔ-ﻭﺍﺳـﺘﺒﺪﺍﻝ‬ ‫ﻟﻌﻨﺎﺻﺮﻫﺎ ﻣﻊ ﺍﻟﻌﻨﺎﺻﺮ ﺍﳌﺴﺘﻘﺮﺓ ﰲ ﺟﺎﺭﻬﺗﺎ ﺍﻟﻴﻤﲎ. ﻭﻋﻠﻰ ﳓﻮ ﻣﺸﺎﺑﻪ، ﻓﺨﻼﻝ ﺍﳌﺮﺣﻠﺔ ﺍﻟﺰﻭﺟﻴـﺔ‬ ‫ﺳﺘﻘﻮﻡ ﻛﻞ ﺇﺟﺮﺍﺋﻴﺔ ﺩﻟﻴﻠﻬﺎ ﺯﻭﺟﻲ ﺑﺈﺟﺮﺍﺀ ﻋﻤﻠﻴﺔ ﻣﻘﺎﺭﻧﺔ-ﻭﺍﺳﺘﺒﺪﺍﻝ ﻟﻌﻨﺎﺻﺮﻫﺎ ﻣـﻊ ﺍﻟﻌﻨﺎﺻـﺮ‬ ‫ﺍﳌﺴﺘﻘﺮﺓ ﰲ ﺟﺎﺭﻬﺗﺎ ﺍﻟﻴﻤﲎ. 
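The alternating odd/even phases just described can be simulated sequentially in a short sketch; in the parallel formulation, each compare-exchange of a pair below would be carried out by a different process on its resident elements.

```python
def odd_even_transposition_sort(a):
    """n phases over n elements. Odd phases compare-exchange the pairs
    (a1,a2), (a3,a4), ... and even phases (a2,a3), (a4,a5), ... (1-based)."""
    a = list(a)
    n = len(a)
    for phase in range(1, n + 1):
        start = 0 if phase % 2 == 1 else 1   # 0-based index of first pair
        for j in range(start, n - 1, 2):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a
```

All pairs within one phase are disjoint, which is exactly why they can be compare-exchanged simultaneously; after n such phases the sequence is sorted.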
ﻫﺬﻩ ﺍﻟﺼﻴﻐﺔ ﺍﳌﺘﻮﺍﺯﻳﺔ ﻣﻌﺮﻭﺿﺔ ﰲ ﺍﳋﻮﺍﺭﺯﻣﻴﺔ )5-3(.‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫19‬ ‫اﻟﺨﻮارزﻣﻴﺔ)5-3(:اﻟﺼﻴﻐﺔ اﻟﻤﺘﻮازﻳﺔ ﻟﺨﻮارزﻣﻴﺔ اﻟﻔ ﺮز ﺏﺎﻹﺏ ﺪال اﻟﺰوﺟ ﻲ-اﻟﻔ ﺮدي، ﻋﻠ ﻰ ﻋ ﺪد ‪-n‬ﻋﻤﻠﻴ ﺔ ﺏ ﺸﻜﻞ‬ ‫ﺡﻠﻘﺔ.‬ ‫;)1 +‬ ‫;)1 -‬ ‫;)1 +‬ ‫;)1 -‬ ‫)‪procedure ODD-EVEN_PAR (n‬‬ ‫‪begin‬‬ ‫‪id := process's label‬‬ ‫‪for i := 1 to n do‬‬ ‫‪begin‬‬ ‫‪if i is odd then‬‬ ‫‪if id is odd then‬‬ ‫‪compare-exchange_min(id‬‬ ‫‪else‬‬ ‫‪compare-exchange_max(id‬‬ ‫‪if i is even then‬‬ ‫‪if id is even then‬‬ ‫‪compare-exchange_min(id‬‬ ‫‪else‬‬ ‫‪compare-exchange_max(id‬‬ ‫‪end for‬‬ ‫‪end ODD-EVEN_PAR‬‬ ‫.1‬ ‫.2‬ ‫.3‬ ‫.4‬ ‫.5‬ ‫.6‬ ‫.7‬ ‫.8‬ ‫.9‬ ‫.01‬ ‫.11‬ ‫.21‬ ‫.31‬ ‫.41‬ ‫.51‬ ‫.61‬ ‫.71‬ ‫ﺧﻼﻝ ﻛﻞ ﻣﺮﺣﻠﺔ ﻣﻦ ﺍﳋﻮﺍﺭﺯﻣﻴﺔ، ﻓﺎﻟﻌﻤﻠﻴﺎﺕ ﺍﻟﺰﻭﺟﻴﺔ ﺃﻭ ﺍﻟﻔﺮﺩﻳﺔ ﺗﺆﺩﻱ ﺧﻄﻮﺓ ﻣﻘﺎﺭﻧﺔ-‬ ‫ﻭﺍﺳﺘﺒﺪﺍﻝ ﻣﻊ ﺍﳉﺎﺭ ﺍﻷﳝﻦ. ﻭﺫﻟﻚ ﻳﺘﻄﻠﺐ ﻣﻦ ﺍﻟﻮﻗﺖ )1(‪ ،Θ‬ﻭﺇﲨﺎﻻ ﺳﻴﺘﻢ ﺃﺩﺍﺀ ‪ n‬ﻣﺮﺣﻠﺔ ﳑﺎﺛﻠﺔ.‬ ‫ﹰ‬ ‫ﻓﻠﺬﺍ، ﺳﻴﻜﻮﻥ ﻭﻗﺖ ﺍﻟﺘﺸﻐﻴﻞ ﻟﻠﺼﻴﻐﺔ ﺍﳌﺘﻮﺍﺯﻳﺔ ﻫﻮ )‪ .Θ(n‬ﻭﺑﺴﺒﺐ ﺃﻥ ﺩﺭﺟﺔ ﺍﻟﺘﻌﻘﻴـﺪ ﻷﻓـﻀﻞ‬ ‫ﺧﻮﺍﺭﺯﻣﻴﺔ ﻓﺮﺯ ﺗﺴﻠﺴﻠﻴﺔ ﻟﻔﺮﺯ ‪ n‬ﻋﻨﺼﺮ ﻫﻮ )‪ ،Θ(n log n‬ﻓﺈﻥ ﻫﺬﻩ ﺍﻟﺼﻴﻐﺔ ﻣﻦ ﺍﻟﻔﺮﺯ ﺑﺎﻹﺑـﺪﺍﻝ‬ ‫ﺍﻟﺰﻭﺟﻲ-ﺍﻟﻔﺮﺩﻱ ﻟﻴﺴﺖ ﻣﺜﺎﻟﻴﺔ ﺍﻟﻜﻠﻔﺔ، ﺑﺴﺒﺐ ﺃﻥ ﻧﺎﺗﺞ ﻋﻤﻠﻴﺔ ﺍﻟﺘﺸﻐﻴﻞ ﻫﻮ )2‪.Θ(n‬‬ ‫ﻟﻠﺤﺼﻮﻝ ﻋﻠﻰ ﻛﻠﻔﺔ ﻣﺜﺎﻟﻴﺔ ﻟﻠﺼﻴﻐﺔ ﺍﳌﺘﻮﺍﺯﻳﺔ، ﻧﺴﺘﺨﺪﻡ ﺇﺟﺮﺍﺋﻴﺎﺕ ﺃﻗﻞ. ﻟﻨﻔﺮﺽ ﺃﻥ ‪ p‬ﻫـﻮ‬ ‫ﻋﺪﺩ ﺍﻹﺟﺮﺍﺋﻴﺎﺕ، ﺣﻴﺚ ‪ .p<n‬ﻭﰲ ﺍﻟﺒﺪﺍﻳﺔ ﳜﺼﺺ ﻟﻜﻞ ﺇﺟﺮﺍﺋﻴﺔ ﻛﺘﻠﺔ ﻣﻜﻮﻧﺔ ﻣﻦ ‪ n/p‬ﻋﻨﺼﺮ،‬ ‫ﻭﺍﻟﱵ ﻳﺘﻢ ﺗﺮﺗﻴﺒﻬﺎ ﺩﺍﺧﻠﻴﺎ )ﺑﺎﺳﺘﺨﺪﺍﻡ ﺍﻟﻔﺮﺯ ﺍﻟﺴﺮﻳﻊ ﺃﻭ ﺍﻟﻔﺮﺯ ﺑﺎﻟﺪﻣﺞ(.‬ ‫ﹰ‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫29‬ ‫2.4.3 ﺥﻮارزﻣﻴﺔ ﺏﺮﻳﻢ ‪ Prim‬ﻹﻳﺠﺎد أﺹﻐﺮ ﺷﺠﺮة هﻴﻜﻠﻴﺔ‬ ‫ﺗﻠﻌﺐ ﻧﻈﺮﻳﺔ ﺍﻟﺒﻴﺎﻥ )‪ (Graph Theory‬ﺩﻭﺭﹰﺍ ﻫﺎﻣﺎ ﰲ ﻋﻠﻢ ﺍﳊﺎﺳﺐ ﺍﻵﱄ ﻷﻬﻧﺎ ﺗﻮﻓﺮ ﻃﺮﻳﻘـﺔ‬ ‫ﹰ‬ ‫ﺳﻬﻠﺔ ﻭﻣﻨﻬﺠﻴﺔ ﻟﻨﻤﺬﺟﺔ ﺍﻟﻌﺪﻳﺪ ﻣﻦ ﺍﳌﺴﺎﺋﻞ. ﻭﳝﻜﻦ ﺍﻟﺘﻌﺒﲑ ﻋﻦ ﺍﻟﻌﺪﻳﺪ ﻣﻦ ﺍﳌﺴﺎﺋﻞ ﻣ ﺧـﻼﻝ‬ ‫ﺍﻟﺒﻴﺎﻥ )‪ (Graph‬ﻛﻤﺎ ﳝﻜﻦ ﺣﻠﻬﺎ ﺑﺎﺳﺘﺨﺪﺍﻡ ﺧﻮﺍﺭﺯﻣﻴﺎﺕ ﺑﻴﺎﻧﻴﺔ ﻗﻴﺎﺳﻴﺔ.‬ ‫1.2.4.3 ﺕﻌﺎرﻳﻒ وﻣﻔﺎهﻴﻢ أﺳﺎﺳﻴﺔ‬ ‫ﺇﻥ ﺍﻟﺒﻴﺎﻥ)‪ (Graph‬ﻫﻮ ﺍﻟﺜﻨﺎﺋﻴﺔ )‪،G=(V,E‬ﻭﻳﺘﺄﻟﻒ ﺍﻟﺒﻴﺎﻥ ‪ G‬ﻣﻦ ﳎﻤﻮﻋﺔ ﻣـﻦ ﺍﻟـﺮﺅﻭﺱ ‪،V‬‬ ‫ﻭﳎﻤﻮﻋﺔ ﻣﻦ ﺍﻷﺿﻼﻉ ‪ ،E‬ﲝﻴﺚ ﺃﻥ ﻛﻞ ﺿﻠﻊ ﻳﺼﻞ ﺑﲔ ﺭﺃﺳﲔ ﻣﻦ ﺍﻟﺮﺅﻭﺱ. 
ﻳﻮﺟﺪ ﻧﻮﻋﺎﻥ ﻣﻦ‬ ‫ِ‬ ‫ﺍﻟﺒﻴﺎﻥ: ﺑﻴﺎﻥ ﻣﻮﺟﻪ )‪ ،(directed graph‬ﻭﺑﻴﺎﻥ ﻏﲑ ﻣﻮﺟﻪ )‪ ،(undirected graph‬ﰲ ﺍﻟﺒﻴﺎﻥ ﺍﳌﻮﺟﻪ‬ ‫ﻳﻜﻮﻥ ﻟﻜﻞ ﺿﻠﻊ ﺍﲡﺎﻩ ﻭﺍﺣﺪ ﻓﻘﻂ، ﰲ ﺣﲔ ﺃﻥ ﺍﻟﺒﻴﺎﻥ ﺍﻟﻐﲑ ﻣﻮﺟﻪ ﻳﻜﻮﻥ ﻟﻠﻀﻠﻊ ﺍﲡﺎﻫﲔ، ﻓﻤﺜﻼ‬ ‫ﹰ‬ ‫ﻭﻋﻠﻰ ﺍﻓﺘﺮﺍﺽ ﺃﻥ ﺍﻟﺮﺃﺳﲔ ‪ u‬ﻭ ‪ v‬ﻳﻨﺘﻤﻴﺎﻥ ﺇﱃ ﳎﻤﻮﻋﺔ ﺍﻟﺮﺅﻭﺱ ‪ ،V‬ﻭﻛﺎﻥ ﻫﻨﺎﻟﻚ ﺿﻠﻊ ‪ e‬ﻳﺼﻞ‬ ‫ﺑﲔ ﺍﻟﺮﺃﺳﲔ )‪ (u,v‬ﻓﻔﻲ ﺍﻟﺒﻴﺎﻥ ﺍﻟﻐﲑ ﻣﻮﺟﻪ ﻧﻘﻮﻝ ﺃﻥ ﺍﻟﺮﺃﺳﲔ ‪ u‬ﻭ ‪ v‬ﻣﺘﺼﻼﻥ. ﺃﻣـﺎ ﰲ ﺍﻟﺒﻴـﺎﻥ‬ ‫ﺍﳌﻮﺟﻬﺔ ﻓﻨﻘﻮﻝ ﺃﻥ ﻫﻨﺎﻙ ﺍﺗﺼﺎﻝ ﻣﻦ ‪ u‬ﺇﱃ ‪.v‬‬ ‫اﻟﺸﻜﻞ )91-3( )‪ (a‬ﺏﻴﺎن ﻏﻴﺮ ﻣﻮﺟﻪ، )‪ (b‬ﺏﻴﺎن ﻣﻮﺟﻪ.‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫39‬ ‫ﺇﺫﺍ ﻛﺎﻥ )‪ (u, v‬ﺿﻠﻊ ﰲ ﺑﻴﺎﻥ ﻏﲑ ﻣﻮﺟﻪ )‪ G = (V, E‬ﻓﺈﻧﻪ ﻳﻘﺎﻝ ﻋﻦ ﺍﻟﺮﺃﺳﲔ ‪ u‬ﻭ ‪ v‬ﺃﻬﻧﻤـﺎ‬ ‫ﳎﺎﻭﺭﺍﻥ ﻟﺒﻌﻀﻬﻤﺎ ﺍﻟﺒﻌﺾ. ﺃﻣﺎ ﰲ ﺣﺎﻝ ﻛﺎﻥ ﺍﻟﻀﻠﻊ ﰲ ﺑﻴﺎﻥ ﻣﻮﺟﻪ ﻓﺈﻧﻨﺎ ﻧﻘﻮﻝ ﺃﻥ ﺍﻟﺮﺃﺱ ‪ v‬ﳎﺎﻭﺭ‬ ‫ﻟﻠﺮﺃﺱ ‪)u‬ﻭﻟﻴﺲ ﺍﻟﻌﻜﺲ ﺻﺤﻴﺤﺎ(.‬ ‫ﺇﻥ ﺍﳌﺴﺎﺭ ﻣﻦ ﺍﻟﺮﺃﺱ ‪ v‬ﺇﱃ ﺍﻟﺮﺃﺱ ‪ u‬ﻫﻮ ﺗﺘﺎﺑﻊ >‪ <v0, v1, v2, ..., vk‬ﻣﻦ ﺍﻟﺮﺅﻭﺱ ﲝﻴﺚ ﺃﻥ‬ ‫‪ v0 = v‬ﻭ ‪ ،vk= u‬ﻭﺃﻥ )1+‪ (vi, vi‬ﺗﻨﺘﻤﻲ ﺇﱃ ‪ E‬ﻣﻦ ﺃﺟﻞ 1 - ‪ .i = 0, 1, ..., k‬ﻭُﻳﻌـﺮﻑ ﻃـﻮﻝ‬ ‫ﱠُ‬ ‫ﺍﳌﺴﺎﺭ ﺑﺄﻧﻪ ﻋﺪﺩ ﺍﻷﺿﻼﻉ ﺍﳌﻮﺟﻮﺩﺓ ﰲ ﺍﳌﺴﺎﺭ)ﺃﻱ ﺍﻟﱵ ﺗﻜﻮﻥ ﺍﳌﺴﺎﺭ(.‬ ‫ﱢ‬ ‫ﻳﻘﺮﻥ ﰲ ﺑﻌﺾ ﺍﻷﺣﻴﺎﻥ ﻭﺯﻥ ﻟﻜﻞ ﺿﻠﻊ ﻣﻦ ‪ .E‬ﻭﺍﻟﻮﺯﻥ ﰲ ﺍﻟﻐﺎﻟﺐ ﻋﺪﺩ ﺣﻘﻴﻘﻲ ﳝﺜﻞ ﻛﻠﻔﺔ‬ ‫ﺃﻭ ﻣﻨﻔﻌﺔ ﺍﻟﻌﺒﻮﺭ ﻟﻠﻀﻠﻊ. ﻭﺍﻟﺒﻴﺎﻥ ﺍﻟﺬﻱ ﻟﻪ ﺃﻭﺯﺍﻥ ﺗﺮﺗﺒﻂ ﻣﻊ ﻛﻞ ﺿﻠﻊ ﻳﺪﻋﻰ ﺑﺄﻧﻪ ﺑﻴﺎﻥ ﻣـﻮﺯﻭﻥ،‬ ‫ﻭﳝﻜﻦ ﺃﻥ ﻳﺸﺎﺭ ﺇﻟﻴﻪ )‪ ،G = (V, E, w‬ﺣﻴﺚ ‪ V‬ﻫﻲ ﺍﻟﺮﺅﻭﺱ ﻭ ‪ E‬ﻫﻲ ﺍﻷﺿﻼﻉ ﻛﻤﺎ ﺃﺷﺮﻧﺎ ﻗﺒﻞ‬ ‫ﻗﻠﻴﻞ، ﺃﻣﺎ ‪ w:E→R‬ﻓﻬﻲ ﺗﺎﺑﻊ ﺣﻘﻴﻘﻲ ﻣﻌﺮﻑ ﻋﻠﻰ ‪ .E‬ﻭﳝﻜﻦ ﺗﻌﺮﻳﻒ ﻭﺯﻥ ﺍﻟﺒﻴﺎﻥ ﻋﻠـﻰ ﺃﻧـﻪ‬ ‫ﳎﻤﻮﻉ ﺃﻭﺯﺍﻥ ﺃﺿﻼﻋﻪ. ﺃﻣﺎ ﻭﺯﻥ ﺍﳌﺴﺎﺭ ﻓﻬﻮ ﳎﻤﻮﻉ ﺃﻭﺯﺍﻥ ﺍﻷﺿﻼﻉ ﺍﳌﻜﻮﻧﺔ ﻟﻪ.‬ ‫ﻫﻨﺎﻙ ﻃﺮﻳﻘﺘﺎﻥ ﻗﻴﺎﺳﻴﺘﺎﻥ ﻟﺘﻤﺜﻴﻞ ﺍﳌﺨﻄﻄﺎﺕ ﺍﻟﺒﻴﺎﻧﻴﺔ ﰲ ﺑﺮﺍﻣﺞ ﺍﳊﺎﺳـﺐ. ﺍﻷﻭﱃ ﺑﺎﺳـﺘﺨﺪﺍﻡ‬ ‫ﺍﳌﺼﻔﻮﻓﺎﺕ ‪ ،Matrix‬ﻭﺍﻟﺜﺎﻧﻴﺔ ﺑﺎﺳﺘﺨﺪﺍﻡ ﺍﻟﻘﻮﺍﺋﻢ ﺍﳌﺘﺼﻠﺔ ‪ .Linked List‬ﻭﻷﻧﻨﺎ ﻟـﻦ ﻧـﺴﺘﺨﺪﻡ‬ ‫ﻃﺮﻳﻘﺔ ﺍﻟﻘﻮﺍﺋﻢ ﺍﳌﺘﺼﻠﺔ ﻓﻠﻦ ﻧﺘﻄﺮﻕ ﺇﻟﻴﻬﺎ.‬ ‫ﻟﻴﻜﻦ ﻟﺪﻳﻨﺎ ﺍﻟﺒﻴﺎﻥ )‪ G = (V, E‬ﻭﻓﻴﻪ ‪ n‬ﺭﺃﺱ ﻣﺮﻗﻤﺔ ﻣﻦ 1 ﻭﺣـﱴ ‪ .n‬ﺇﻥ ﻣـﺼﻔﻮﻓﺔ ﺍﳉـﻮﺍﺭ‬ ‫‪ adjacency matrix‬ﳍﺬﺍ ﺍﻟﺒﻴﺎﻥ ﻫﻲ ﺍﳌﺼﻔﻮﻓﺔ )‪ A=(ai,j‬ﻭﳍﺎ ﺍﳊﺠﻢ ‪ ،n × n‬ﻭﻣﻌﺮﻓﺔ ﻛﺎﻟﺘﺎﱄ:‬ ‫اﻟﺤﺎﺳﺒﺎت اﻟﻤﺘﻮازﻳﺔ و اﻟﺨﻮارزﻣﻴﺎت اﻟﻤﺘﻮازﻳﺔ‬ ‫49‬ ‫ﰲ ﺍﻟﺸﻜﻞ)02-3( ﺗﻮﺿﻴﺢ ﻟﺒﻴﺎﻥ ﻏﲑ ﻣﻮﺟﻪ ﳑﺜﻞ ﲟﺼﻔﻮﻓﺔ ﺟﻮﺍﺭ. ﻭﻳﻼﺣﻆ ﺃﻥ ﻣﺼﻔﻮﻓﺔ ﺍﳉﻮﺍﺭ‬ ‫ﻟﻠﺒﻴﺎﻥ ﺍﻟﻐﲑ ﻣﻮﺟﻪ ﻫﻲ ﻣﺼﻔﻮﻓﺔ ﻣﺘﻨﺎﻇﺮﺓ. 
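The adjacency-matrix representation defined above (a_ij = 1 exactly when an edge joins vertices i and j) can be built in a few lines; for an undirected graph each edge is recorded in both directions, which is why the matrix comes out symmetric. The example graph below is hypothetical.

```python
def adjacency_matrix(n, edges):
    """n x n matrix with a[i][j] = 1 iff vertices i and j (0-based) are
    joined by an edge; undirected, so entries are written symmetrically."""
    a = [[0] * n for _ in range(n)]
    for u, v in edges:
        a[u][v] = 1
        a[v][u] = 1
    return a

# A hypothetical 4-vertex cycle.
m = adjacency_matrix(4, [(0, 1), (1, 2), (2, 3), (0, 3)])
```

Storing the full matrix takes Θ(n²) space regardless of how many edges exist, matching the remark above.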
The adjacency-matrix representation can be modified to accommodate weighted graphs. In that case A = (a_i,j) can be defined as a_i,j = w(v_i, v_j) if (v_i, v_j) ∈ E, and a_i,j = 0 otherwise.

Figure (3-20): an undirected graph and its adjacency-matrix representation.

We refer to the modified adjacency matrix as the weighted adjacency matrix. The space required to store the adjacency matrix of a graph with n vertices is Θ(n²).

3.4.2.2 The Minimum Spanning Tree (MST): Prim's Algorithm

A spanning tree of an undirected graph G is a subgraph of G that is a tree containing all the vertices of G. In a weighted graph, the weight of a subgraph is the sum of the weights of its edges, and a minimum spanning tree (MST) of a weighted undirected graph is a spanning tree of minimum weight. Many problems require finding a minimum spanning tree of an undirected graph. For example, it may be necessary to find the shortest total length of cable connecting a set of computers in a network, or the minimum cost of linking the communication lines between cities; such problems can be solved by searching for the minimum spanning tree of the undirected graph that contains all the possible connections. Figure (3-21) shows a minimum spanning tree of an undirected graph.

Figure (3-21): an undirected graph, and its minimum spanning tree.

If G is not connected, it cannot have a spanning tree; instead it has a spanning forest. To simplify the idea of computing a minimum spanning tree, we assume that G is connected.

Prim's algorithm for finding a minimum spanning tree is a greedy algorithm. It starts by selecting an arbitrary vertex as the starting element, and then grows the minimum spanning tree by choosing a new vertex and edge at each step, where the choice is guaranteed to yield a spanning tree of minimum cost. The algorithm then continues working until all the vertices have been selected.

Let G = (V, E, w) be the weighted undirected graph whose minimum spanning tree we want to find, and let A = (a_i,j) be its weighted adjacency matrix. Prim's algorithm is presented in Algorithm (3-6). The algorithm uses the set VT to hold the vertices of the minimum spanning tree while it is being constructed.
The algorithm also uses the array d[1..n] to keep track of tree-edge weights: for every vertex satisfying v ∈ (V - VT), d[v] holds the weight of the minimum-weight edge connecting any vertex in VT to the vertex v.

Initially, the set VT contains an arbitrary vertex r that becomes the root of the minimum spanning tree. In addition, d[r] = 0, and for every v ∈ (V - VT), d[v] = w(r, v) if such an edge exists; otherwise d[v] = ∞.

During each iteration of the algorithm, a new vertex u is added to the set VT such that d[u] = min{d[v] | v ∈ (V - VT)}. After this vertex is added, all the values d[v] with v ∈ (V - VT) are updated, because there may now be an edge of smaller weight connecting the vertex v to the newly added vertex u. The algorithm stops working when VT = V.

Figure (3-22) illustrates the algorithm. When Prim's algorithm terminates, the cost of the minimum spanning tree is Σ_{v∈V} d[v]. Algorithm (3-6) can easily be modified so that it also stores the edges that lie on the minimum spanning tree.

Figure (3-22): Prim's minimum spanning tree algorithm. The root of the MST is b. For each iteration, the vertices in VT and the selected edges are shown in bold. The array d[v] shows the values for the vertices in V - VT after they have been updated.

In Algorithm (3-6), the body of the while loop (lines 10-13) is executed n - 1 times. Computing min{d[v] | v ∈ (V - VT)} (line 10) and the for loop (lines 12-13) are each performed in O(n) steps.
Hence the overall complexity of Prim's algorithm is Θ(n²).

Algorithm (3-6): the sequential Prim's algorithm for a minimum spanning tree.

    1.  procedure PRIM_MST(V, E, w, r)
    2.  begin
    3.      VT := {r};
    4.      d[r] := 0;
    5.      for all v ∈ (V - VT) do
    6.          if edge (r, v) exists set d[v] := w(r, v);
    7.          else set d[v] := ∞;
    8.      while VT ≠ V do
    9.      begin
    10.         find a vertex u such that d[u] := min{d[v] | v ∈ (V - VT)};
    11.         VT := VT ∪ {u};
    12.         for all v ∈ (V - VT) do
    13.             d[v] := min{d[v], w(u, v)};
    14.     endwhile
    15. end PRIM_MST

• The parallel formulation of Prim's algorithm:

Prim's algorithm is iterative, and every iteration adds a new vertex to the minimum spanning tree. Because the value d[v] of a vertex v may change every time a vertex is added to VT, it is hard to select more than one vertex at a time for addition to the minimum spanning tree. For example, in Figure (3-23), after vertex b has been selected, the minimum spanning tree cannot be found if the vertices d and c are selected together, because after vertex d is selected the value of d[c] is updated from 5 to 2. It is therefore not easy to execute different iterations of the while statement in parallel. Each individual iteration, however, can be executed in parallel, as follows.

Let p be the number of processes and n the number of vertices in the graph. The set V is partitioned into p subsets, each containing n/p consecutive vertices, and the work associated with each subset is assigned to a different process. Let Vi be the subset of vertices assigned to process Pi, for i = 0, 1, ..., p - 1. Each process Pi stores the part of the array d that corresponds to Vi (that is, process Pi stores d[v] such that v ∈ Vi). Figure (3-24.a) illustrates this partitioning. During each iteration of the while loop, every process Pi computes di[u] = min{di[v] | v ∈ (V - VT) ∩ Vi}.

The global minimum over all the values di[u] is then obtained and stored in process P0. Process P0 now holds the new vertex u that will be added to VT, and process P0 broadcasts u to all the processes.
The process Pi responsible for the vertex u then marks u as belonging to the set VT. Finally, every process updates the values d[v] for its own (local) vertices.

Figure (3-24): the partitioning of the array d and the adjacency matrix A among p processes.

When a new vertex u is added to VT, the values d[v] satisfying v ∈ (V - VT) must be updated. The process corresponding to v must know the weight of the edge (u, v). Therefore each process Pi needs to store the columns of the weighted adjacency matrix that correspond to its assigned vertex set Vi. The space required to store the needed parts of the adjacency matrix in each process is Θ(n²/p). Figure (3-24.b) shows this partitioning of the weighted adjacency matrix.

Chapter Four: Parallel Programming

As we saw in a previous chapter, there are techniques that can be followed to make a particular algorithm or problem parallel. But one question still remains: "how do we program this decomposed problem on the parallel computer?" The answer is that we can do so using a kind of programming called "parallel programming".
Expressing parallelism in user programs requires corresponding expressive power in the programming languages and their form: they need, for example, some primitive instructions to express parallelism between two tasks, and others for communication, synchronization, and the like.

Parallel programming (Parallel Programming): programming in a language that includes parallel constructs or features.

Parallel features may be built into programming languages that adopt parallelism as a design principle, such as OCCAM and CSP (Communicating Sequential Processes). Alternatively, sequential programming languages can be extended to include parallel instructions, as in Parallel-FORTRAN, Parallel-Pascal, and Parallel-C. Parallel features can also be attached to a conventional sequential language such as FORTRAN or C/C++ by using library routines such as the MPI library, the PVM library, or Pthreads (POSIX Threads).

In what follows we discuss the OCCAM language in general terms, then give a quick overview of the parallel language FORTRAN-90, and finally treat in more depth parallel programming using the MPI library through the C++ language.

4.1 The OCCAM language

OCCAM is one of the languages designed specifically for parallel programming: it supports explicit parallelism, as well as apparent parallelism on a single processor by means of time-sharing. All the characteristics of parallel work were taken into account when the language and its instructions were designed. OCCAM was implemented after long studies of parallel programs and languages. The language benefited from the outcome of the studies of CSP (Communicating Sequential Processes), one of the first languages designed for this purpose and among the best suited for specifying and writing parallel programs; CSP, however, never became popular, being a specification language, and its use remained confined to research centers and scientific laboratories.

OCCAM was designed to execute in particular on transputer processors¹, and it exploits to a large degree the hardware and structural properties of that processor.
Because the language's instructions, and especially the constructs related to branching, operate at a level close to the processor, its use remained restricted to specialists in parallel processing, and it found little popularity among conventional users.

An OCCAM parallel program consists of a number of tasks called Processes, each of which executes on one of the available processing nodes. Each task in turn consists of a number of concurrent sub-tasks (Concurrent Processes) that share the resources of the single processor.

¹ Transputer: short for "transistor computer". It is a complete computer on a single chip, including random-access memory (RAM) and a floating-point unit (FPU); the device is designed primarily as a building block for parallel computing systems.

In OCCAM, communication between the concurrent tasks takes place by passing messages from channel to channel.

OCCAM has five primitive processes: assignment, send, receive, skip, and stop. Their syntax, with examples, is as follows:

Assignment: assigns the value of an arithmetic expression to a variable.
    SYNTAX:  <variable> := <expression>
    Example: x := y + 1

Receive: receives a value from a channel; the symbol "?" expresses the request.
    SYNTAX:  <channel> ? <variable>
    Example: ch ? x

Send: sends the value of an expression to a channel; the symbol "!" expresses the call.
    SYNTAX:  <channel> ! <expression>
    Example: ch ! y + 1

Skip: skips the current process.
    SYNTAX:  SKIP

Stop: stops the process.
    SYNTAX:  STOP

Hierarchically (Hierarchical), each sub-task is itself composed of a number of concurrent or sequential sub-tasks, and parallelism in this language may operate at the level of a single instruction or at the level of a process within the single processor.
In this way a parallel program, when executed, may comprise hundreds of concurrent and parallel tasks.

Parallelism in OCCAM is expressed by distributing the top-level tasks over different processors. Within a task it is expressed using the control construct PAR, which means that the instructions or processes that follow are executed in parallel (apparent parallelism).

To express parallelism while executing the processes S1, S2, S3 we can write:

    PAR           -- Execute the following in parallel
      S1
      S2
      S3

The two processes S1, S2 can also be defined directly inside the PAR construct, in the following form:

    CHAN OF INT in, out, middle :  -- Declaration of the channels
    PAR                            -- Execute in parallel
      INT x :                      -- The first process
      WHILE TRUE
        SEQ                        -- Sequential execution of following
          in ? x                   -- Read x from in channel
          middle ! x               -- Write x on middle channel
      INT x :                      -- The second process
      WHILE TRUE
        SEQ                        -- Sequential execution of following
          middle ? x               -- Read x from middle channel
          out ! x                  -- Write x on out channel

This program represents a two-element buffer. It consists of two concurrent tasks that interact through the communication channel middle, as Figure (4-1) shows.

Figure (4-1)

The PAR instruction is also used as a replicator construct, creating several similar concurrent tasks that execute in parallel. This is done as follows:

    PAR i = 0 FOR n   -- create n processes and execute them in parallel
      S[i]

A PAR construct represents a task composed of several tasks that execute in parallel; this parent task ends when all of its constituent sub-tasks have ended.

The PAR construct is also used to:
• define the structure of the parallel program at the level of the top-level tasks;
• place the tasks on the different processors.

In that case the construct takes the form PLACED PAR and is used as follows:

    PLACED PAR                    -- Place in parallel
      PROCESSOR 1                 -- Processor naming
        P1                        -- Task P1 on processor 1
      PROCESSOR 2                 -- Processor naming
        P2                        -- Task P2 on processor 2
      PROCESSOR 3                 -- Processor naming
        P3                        -- Task P3 on processor 3

As we noted in an earlier example, OCCAM provides a control construct opposite to the PAR instruction: the SEQ instruction, which expresses sequential execution of the instructions that follow it.

Tasks can be built in OCCAM:
• statically: at the start of execution one task is placed on each processor;
• dynamically: each task creates internal sub-tasks with the help of the PAR construct. A task cannot, however, create a sub-task on another processor: task creation is local and hierarchical.

Tasks interact with one another by exchanging messages over unidirectional channels connecting pairs of tasks. Communication in OCCAM is synchronous: it takes place after the two communicating tasks meet at a rendezvous point.

Two tasks of a parallel program may be connected by several communication channels. Each channel is dedicated to carrying one specific Type of information, and the data are transferred according to a defined and specified protocol.

The type of the exchanged data is specified when the communication channel is defined, as follows:

    CHAN OF <TYPE> <NAME1>, <NAME2> ;

    Example:
    CHAN OF INT ch ;   -- ch: channel of type integer

Information-transfer protocols are defined in OCCAM to make it possible to send composite data of different Types over the same channel. To send an integer followed by a real number, the following protocol is defined:

    PROTOCOL int.real IS
      INT; REAL32 :
    CHAN OF int.real ch :
    INT inint; outint :
    REAL32 inreal; outreal :
    PAR
      ch ! outint; outreal
      ch ? inint; inreal

Data are exchanged using two instructions: the first to send information, which is (!), and the other to receive information, which is (?).
The type of the exchanged data is specified, as we mentioned, when defining the communication channel that connects two tasks.

Example: the two tasks P1 and P2 exchange information in both directions over the two channels Chan1 and Chan2, as in Figure (4-2).

Figure (4-2)

OCCAM also includes the PRI construct for specifying the priority (Priority) of executing one task relative to the other tasks, so that this task has execution priority among the concurrent tasks on a single processor. Using the PRI construct, the tasks are executed according to their order: the first task has the highest priority, then the task that follows it, and so on.

The following example illustrates the PRI construct:

    PRI PAR
      P1      -- highest priority
      PAR     -- three processes with middle priority
        P2
        P3
        P4
      P5      -- lowest priority

OCCAM also uses the control constructs known from conventional programming languages, such as FOR, WHILE, ..., and others; their use matches the use of their counterparts in the familiar conventional languages.

Finally, in addition to the construct for placing tasks on processors, OCCAM includes another construct for mapping the communication channels that connect tasks placed on different processors onto the physical channels that connect the processors.

The following example shows how to place a task that controls the keyboard and communicates with the other tasks over two communication channels, and it also shows the mapping of the logical communication channels onto the physical communication channels:

    PLACED PAR                     -- Place in parallel
      PROCESSOR 0                  -- Processor naming
        PLACE in  AT link 0 in :   -- Place channel in on link 0 in (physical channel)
        PLACE out AT link 1 out :  -- Place channel out on link 1 out (physical channel)
        keyboard(in, out)          -- Place process keyboard on Processor 0
      PROCESSOR 1                  -- Processor naming
        PLACE out1 AT link 0 out : -- Place channel out1 on link 0 out
        PLACE in1  AT link 0 in :  -- Place channel in1 on link 1 in
        screen(in1, out1)          -- Place process screen on Processor 1

4.2 The FORTRAN-90 language
FORTRAN-90 is a language derived from FORTRAN, but designed for parallel computers of the SIMD type, in which the synchronized processors execute the same instruction at every moment.

Parallelism in this language appears in operations on vectors: the operands of arithmetic operations may be vectors of dimension N. So instead of writing the following loop:

    for I=1,N
      A[I] = B[I] + C[I]

this language lets us write instructions on the vectors themselves, in the following form:

    A(1:N) = B(1:N) + C(1:N)
    T(1:N) = A(2:N+1)
    B(1:N) = 2*T(1:N)

Each processor executes the specified operation on the part of the vector assigned to it. On these computers the compiler and the operating system cooperate to execute these complex instructions, and synchronization plays a fundamental role in facilitating the execution of an instruction.

As for HPF (High Performance FORTRAN), it is the newest of the parallel languages available on modern computers. It adopts the principle of data distribution, and it is one of the first standard languages for this class of computers. It adopts the principles of FORTRAN-90 and contains, in addition, instructions for distributing the data over the different available processors, while the communication operations between the processors remain transparent to the user.

4.3 The Message Passing Interface (MPI)

The Message Passing Interface (MPI) was produced in 1994 (the first final version) to be a standard library that producers of parallel computers could rely on, instead of each vendor producing its own library as used to happen before.
This library defined standard message-passing conventions that can be used to develop message-passing programs through the C/C++ or FORTRAN programming languages. The MPI library was developed by a group drawn from both the academic and the industrial sectors, and it received wide support from most hardware vendors.

The MPI library contains more than 125 routines, but fully functional parallel programs can be written using only 6 routines; these are presented in Table (4-1). These six routines are used to initialize and terminate the library, to fetch information about the parallel execution environment the program runs on, and to send and receive messages.

In this section we cover these routines, along with a few of the basic concepts necessary for writing correct and efficient programs for the message-passing model using MPI.

Table (4-1): a set of the basic routines of the MPI library.

4.3.1 The general structure of MPI programs

The following figure shows the general structure of MPI programs.

The routine MPI_Init is called before any call to any other routine of the MPI library; its function is to start up the MPI environment. Calling this routine more than once during the lifetime of the program will cause an error. The routine MPI_Finalize is called to terminate the computation, and no MPI routine may be called after the termination routine has been called.

The two routines MPI_Init and MPI_Finalize must be called by all the processes; otherwise the behavior of the program will be abnormal.
The calling form of these two routines in C++ is:

    int MPI_Init(int *argc, char ***argv)
    int MPI_Finalize()

All MPI routines, data types, and constants are prefixed with the letters MPI_ (for example MPI_Init). There is a constant that is returned when an operation completes successfully, namely MPI_SUCCESS. This constant and the other MPI constants and data structures are all defined for the C++ language in the file "mpi.h", and this file must be included in every MPI program.

We will now write our first program using MPI, making use of the two routines above. It is a simple program in which each of our processors prints the greeting message "Welcome!". Note that in MPI programs, each processor of the parallel computer has its own copy of the same program:

    #include <iostream.h>
    #include <mpi.h>                // include the MPI library in our program

    int main(int argc, char ** argv)
    {
        MPI_Init( &argc, &argv );
        cout << "Welcome!" << endl;
        MPI_Finalize();
    }

From the program above we can draw several observations:

• We included the library mpi.h, and this gives us the ability to use all the functions of the MPI library.
• The program has a beginning and an end. The beginning was the call to MPI_Init(), which tells the operating system that this program is an MPI program, whereupon the operating system performs the required setup.
The end was the call to the routine MPI_Finalize(), which tells the operating system that it must tear down everything related to MPI.

• If the program is perfectly parallel, as this program of ours is, then the operations that occur between the initialization and termination statements use no communication at all.

When we compile this program and run it, we will get a set of "Welcome!" messages printed on the screen, and the number of these messages will be equal to the number of processors in the parallel computer the program was run on. It is worth pointing out that, because the message-printing operations are concurrent, the computer has to serialize the output into a sequence in order to display it on the screen.

4.3.2 Communicators

One of the main things used in all real MPI programs is what is called the communication domain. A communication domain is a group of processes that are allowed to communicate with one another. Some of the information about a communication domain is stored in variables of type MPI_Comm, which are called communicators. These communicators are used as parameters to all the message-transfer routines of MPI. Note that a single process can belong to several different communication domains.

Communicators are used to define a group of processes that can communicate with one another, and this group of processes forms a communication domain. In general, all the processes may need to communicate with each other; for this reason MPI defines a default communicator called MPI_COMM_WORLD, which includes all the processes used for the parallel execution.
In many cases, however, we may need to perform communication only within a particular group of processes. By using a different communicator for each group, we can guarantee that its messages never interfere with the messages of another group.

4.3.3 Obtaining information about the execution environment

The two functions MPI_Comm_size and MPI_Comm_rank are used to obtain information about the environment in which the program runs: the first is used to determine the number of processes, and the second to determine the address, or rank, of the calling process. The two routines take the form:

    int MPI_Comm_size(MPI_Comm comm, int *size)
    int MPI_Comm_rank(MPI_Comm comm, int *rank)

The function MPI_Comm_size returns, in the variable size, the number of processes that belong to the communication domain comm.

Each process belonging to a communication domain is identified by its rank. The rank of a process is an integer ranging from zero to the size of the communication domain minus one. The rank of a process can be found using the function MPI_Comm_rank, which takes two parameters: the communication domain, and an integer variable rank; the variable rank stores the rank of the process. The process that calls either of these functions must belong to the communication domain, or else an error will occur.

We previously wrote our first parallel program using only the startup and termination functions. Now we will improve that program: instead of printing greeting messages without knowing their source, we will now write a program in which each process prints a greeting message and indicates next to it that it is the process that printed this message.

    #include <iostream.h>
    #include <mpi.h>

    main(int argc, char *argv[])
    {
        int npes, myrank;
        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &npes);
        MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
        cout << "Welcome! from process " << myrank;
        cout << " of " << npes << endl;
        MPI_Finalize();
    }

When this program is run on a computer that has four processors, the printed output will be similar to the following:

    Welcome! from process 0 of 4
    Welcome! from process 2 of 4
    Welcome! from process 3 of 4
    Welcome! from process 1 of 4

Note that the output may well not be ordered correctly, because all the processors try to print to the screen at the same time, and it is the operating system that decides the order.

4.3.4 Data communication in MPI

Messages can be sent and received in MPI using the following two functions: MPI_Send for sending and MPI_Recv for receiving. Below we show the syntax of both functions, with an explanation of the parameters and an example of their use.

The syntax for calling the functions:

    int MPI_Send(
        void          *message,   /* in  */
        int            count,     /* in  */
        MPI_Datatype   datatype,  /* in  */
        int            dest,      /* in  */
        int            tag,       /* in  */
        MPI_Comm       comm       /* in  */
    )

    int MPI_Recv(
        void          *message,   /* out */
        int            count,     /* in  */
        MPI_Datatype   datatype,  /* in  */
        int            source,    /* in  */
        int            tag,       /* in  */
        MPI_Comm       comm,      /* in  */
        MPI_Status    *status     /* out */
    )

Description of the parameters:

• message – the starting address of the send or receive buffer.
• count – the number of elements in the send or receive buffer.
• datatype – the data type of the elements in the send buffer.
• source – the rank of the process sending the data.
• dest – the rank of the process receiving the data.
• tag – the message tag.
• comm – specifies the communication domain.
• status – the status object.

The function MPI_Send sends the data held in the buffer it is pointed to. This buffer consists of consecutive elements of the type specified by the parameter datatype, and the number of elements in the buffer is specified by the parameter count. The destination of the messages sent by MPI_Send is specified by the two parameters dest and comm.
The parameter dest is the rank of the target process within the communication domain specified by the communicator comm. Every message is accompanied by a tag, which is of integer type and is used to distinguish between different kinds of messages. The tag can take values ranging from zero up to the upper bound defined by MPI, which is MPI_TAG_UB.

Data types in MPI:

The MPI data types that correspond to the types of the C++ language are shown in Table (4-2). For every type in C there is an equivalent type in MPI; in addition there are other types that have no counterpart in the C language, namely MPI_BYTE and MPI_PACKED.

Table (4-2): the data types of C++, and their counterparts in MPI.

As for the function MPI_Recv, it receives the messages sent by a process whose rank is held in the variable source, that process being within the communication domain specified by the parameter comm. The tag of the sent message must match the one specified by the parameter tag; if there are several messages with the same tag from the same process, then any one of these messages is received. MPI allows a wildcard value for these parameters, whether the source parameter source or the tag: if source is set to the value MPI_ANY_SOURCE, then any process in the communication domain can be the source of the message. In the same way, if the tag is set to the value MPI_ANY_TAG, then the messages are all accepted, with any tag. The received message is stored at the location pointed to by the message buffer. The two parameters count and datatype of the routine MPI_Recv are used to specify the length of the buffer that has been provided, and the received message must be of a length equal to or less than this length. This allows the receiving processor not to know the exact size of the message being sent. If the received message is larger than the provided buffer, an overflow error results, and the routine returns the error MPI_ERR_TRUNCATE.

After a message is received, the variable status can be used to obtain information about the communication operation.
In the C++ language the status variable is stored in the structure MPI_Status. Below is its implementation as a structure with three fields:

    typedef struct MPI_Status {
        int MPI_SOURCE;
        int MPI_TAG;
        int MPI_ERROR;
    };

The two members MPI_SOURCE and MPI_TAG store the source and the tag of the received message; they are useful when either of MPI_ANY_SOURCE and MPI_ANY_TAG has been used. The member MPI_ERROR stores the error number for the received message.

The status parameter also returns information about the length of the received message. This information cannot be obtained directly from the status variable, but we can obtain it by calling the function MPI_Get_count, which takes the following form:

    int MPI_Get_count(MPI_Status *status, MPI_Datatype datatype,
                      int *count)

As we can see, this function takes as parameters the status variable status and the data type datatype, both of which we obtained from the function MPI_Recv; what the function MPI_Get_count returns to us is the number of elements, which it places in the variable count.

The following example shows how the send and receive operations are used:

    int mynode, totalnodes;
    int datasize;    // number of data items to be sent or received
    int sender;      // rank of the sending process
    int receiver;    // rank of the receiving process
    int tag;         // integer used as a tag or marker for the message
    MPI_Status status;   // variable holding status information

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &totalnodes);
    MPI_Comm_rank(MPI_COMM_WORLD, &mynode);

    // determine the number of data items, datasize
    double * databuffer = new double[datasize];
    // Fill in sender, receiver, tag on sender/receiver processes,
    // and fill in databuffer on the sender process.
    if (mynode == sender)
        MPI_Send(databuffer, datasize, MPI_DOUBLE, receiver,
                 tag, MPI_COMM_WORLD);
    if (mynode == receiver)
        MPI_Recv(databuffer, datasize, MPI_DOUBLE, sender, tag,
                 MPI_COMM_WORLD, &status);
    // the send and receive operation is complete

4.3.5 Application programs using MPI

With this much we have covered the basic aspects of programming using MPI, and the time has come to write real programs. In what follows we close this chapter by presenting several programs whose purpose is to apply the information that has come up so far.

A program for sending and receiving data

The aim of this program is to illustrate and understand the communication process between processes. In it we will create an array on every process, but the initialization of all the arrays will take place on process zero (P0). After all the arrays have been initialized on process P0, process P0 sends to every process.

    01 #include <iostream.h>
    02 #include <mpi.h>
    03
    04 int main(int argc, char *argv[]) {
    05     int i;
    06     int nitems = 10;
    07     int mynode, totalnodes;
    08     MPI_Status status;
    09
    10     double *array;
    11
    12     MPI_Init(&argc, &argv);
    13     MPI_Comm_size(MPI_COMM_WORLD, &totalnodes);
    14     MPI_Comm_rank(MPI_COMM_WORLD, &mynode);
    15
    16     array = new double[nitems];
    17
    18     if (mynode == 0) {
    19         for (i = 0; i < nitems; i++)
    20             array[i] = (double) i;
    21     }
    22
    23     if (mynode == 0)
    24         for (i = 1; i < totalnodes; i++)
    25             MPI_Send(array, nitems, MPI_DOUBLE, i,
                            1, MPI_COMM_WORLD);
    26     else
    27         MPI_Recv(array, nitems, MPI_DOUBLE,
                        0, 1, MPI_COMM_WORLD, &status);
    28
    29     for (i = 0; i < nitems; i++) {
    30         cout << "Processor " << mynode;
    31         cout << ": array[" << i << "] = " << array[i] << endl;
    32     }
    33
    34     delete[] array;
    35
    36     MPI_Finalize();
    37 }

Analysis of the program:

• We first initialized MPI and gathered the information (lines 12 through 14).
• We then created an array on every process using dynamic memory allocation (line 16).
• On process P0 only (that is, mynode = 0), we initialized the arrays to hold the value of the index variable (lines 18 through 21).
• On process P0, we call the send routine MPI_Send (totalnodes - 1) times (lines 23 through 25).
• On every process except process P0 we call MPI_Recv in order to receive the message (lines 26 and 27).
• On every process we print the result of the send and receive operation (lines 29 through 32).
• On every process, we free the memory location occupied by the array (line 34).

A program for sending data around a ring (Ring)

The function of this program is to add up the processor numbers by having the processors communicate in a ring fashion: the processor numbers are passed around the ring so they can be summed (we saw earlier that every processor is given its own number). For example, if we had four processors, the values 0 + 1 + 2 + 3 would be summed. In advanced programs the data sent are of real value, rather than just performing an operation such as this one.

    #include "mpi.h"
    #include <iostream.h>

    /* Set up communication tags (these can be anything) */
    const int to_right = 201;
    const int to_left  = 102;

    void main(int argc, char *argv[])
    {
        int value, new_value, procnum, numprocs;
        int right, left;
        int sum, i;
        MPI_Status recv_status;

        /* Initialize MPI */
        MPI_Init(&argc, &argv);
        /* Find out this processor number */
        MPI_Comm_rank(MPI_COMM_WORLD, &procnum);
        /* Find out the number of processors */
        MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

        /* Compute number of the processor to the right */
        right = procnum + 1;
        if (right == numprocs) right = 0;
        /* Compute number of the processor to the left */
        left = procnum - 1;
        if (left == -1) left = numprocs - 1;

        sum = 0;
        value = procnum;
        for (i = 0; i < numprocs; i++) {
            /* Send to the right */
            MPI_Send(&value, 1, MPI_INT, right, to_right,
                     MPI_COMM_WORLD);
            /* Receive from the left */
            MPI_Recv(&new_value, 1, MPI_INT, left, to_right,
                     MPI_COMM_WORLD, &recv_status);
            /* Sum the new value */
            sum = sum + new_value;
            /* Update the value to be passed */
            value = new_value;
            /* Print out the partial sums at each step */
            cout << "PE " << procnum << ": Partial sum = " << sum << endl;
        }

        /* Print out the final result */
        if (procnum == 0) {
            cout << "Sum of all processor numbers = " << sum << endl;
        }

        /* Shut down MPI */
        MPI_Finalize();
        return;
    }

If the program above is run on a machine with four processors, it will have the following output:

    PE 1: Partial sum = 0
    PE 2: Partial sum = 1
    PE 3: Partial sum = 2
    PE 0: Partial sum = 3
    PE 1: Partial sum = 3
    PE 2: Partial sum = 1
    PE 3: Partial sum = 3
    PE 0: Partial sum = 5
    PE 1: Partial sum = 5
    PE 2: Partial sum = 4
    PE 3: Partial sum = 3
    PE 0: Partial sum = 6
    PE 1: Partial sum = 6
    PE 2: Partial sum = 6
    PE 3: Partial sum = 6
    PE 0: Partial sum = 6
    Sum of all processor numbers = 6

A program to sum a series of numbers

In what follows we will program a simple numerical example, in which we want to add up all the numbers from 1 through 1000. First let us look at the implementation of the sequential code:

    #include <iostream.h>

    int main(int argc, char **argv)
    {
        int sum;
        sum = 0;
        for (int i = 1; i <= 1000; i = i + 1)
            sum = sum + i;
        cout << "The sum from 1 to 1000 is: " << sum << endl;
    }

The code shown above adds the numbers in the usual sequential way, running on a single processor. But what if we wanted to perform this work on several processors? (There is, of course, no need for several processors in work like this; the point of this example is to explain how to decompose the problem and program it.)

Suppose we have two processors, and we want the first processor to add the numbers from 1 through 500, while the second processor adds the numbers from 501 through 1000; at the end the two resulting values are added together to obtain the overall result for the numbers from 1 through 1000.
Figure (3-4) shows a general schematic of this idea. There we have P processors (in Figure (3-4), P = 8) and the problem is divided into P subproblems; when all the processors finish, they send their results to processor P0 for the final accumulation of the total.

Figure (3-4): All the information is gathered onto one processor using send and receive.

We will now partition the operation using MPI.

We saw earlier that every process can find out the total number of processes being used (by calling MPI_Comm_size), and we also saw how a process can identify itself, that is, learn its rank (by calling MPI_Comm_rank). Suppose the variable mynode stores the result of calling MPI_Comm_rank, and suppose the variable totalnodes stores the result of calling MPI_Comm_size. The formula for dividing the summation across the processes is then given by the following code:

startval = 1000*mynode/totalnodes + 1;
endval   = 1000*(mynode+1)/totalnodes;

If we are using only one processor, then totalnodes = 1 and mynode = 0, so the starting point is startval = 1 and the end point is endval = 1000. If we are using two processors, then totalnodes = 2 and mynode is either 0 or 1. For mynode = 0 we get startval = 1 and endval = 500, while for mynode = 1 we get startval = 501 and endval = 1000.
We can continue in this way all the way up to 1000 processors, at which point each processor takes a single number (there is no summing left to do!) and all the values are sent to processor zero (processor numbering starts at zero) for accumulation.

Once each processor has a start value and an end value for the summation, it can execute a for loop that adds up the values lying between startval and endval. After each processor finishes computing the partial sum assigned to it, all the processors except processor zero send their partial sums to processor zero, and in this way we obtain the overall result.

The following C++/MPI program carries out the operation described above:

#include <iostream.h>
#include <mpi.h>
int main(int argc, char **argv)
{
  int mynode, totalnodes;
  int sum, startval, endval, accum;
  MPI_Status status;

  MPI_Init(&argc, &argv);
  MPI_Comm_size(MPI_COMM_WORLD, &totalnodes); // get totalnodes
  MPI_Comm_rank(MPI_COMM_WORLD, &mynode);     // get mynode

  sum = 0;                                    // zero sum for accumulation
  startval = 1000*mynode/totalnodes + 1;
  endval   = 1000*(mynode+1)/totalnodes;

  for (int i = startval; i <= endval; i = i + 1)
    sum = sum + i;

  if (mynode != 0)
    MPI_Send(&sum, 1, MPI_INT, 0, 1, MPI_COMM_WORLD);
  else
    for (int j = 1; j < totalnodes; j = j + 1) {
      MPI_Recv(&accum, 1, MPI_INT, j, 1, MPI_COMM_WORLD, &status);
      sum = sum + accum;
    }

  if (mynode == 0)
    cout << "The sum from 1 to 1000 is: " << sum << endl;

  MPI_Finalize();
}

________________________

The odd-even sort program

This program sorts a set of numbers using the odd-even sort algorithm, one of the algorithms we saw in Chapter Three, in the section "Examples of parallel algorithms". Before writing the parallel program, however, we will first write the serial program for the algorithm:

/* a[] is assumed to be a global array holding the n elements to be sorted */
void ODD_EVEN(int n)
{
  int i, j, temp;
  for (i = 1; i <= n; i = i + 1) {
    if (i % 2 == 1) {
      /* Odd phase: compare-exchange the pairs (a[0],a[1]), (a[2],a[3]), ... */
      for (j = 0; j <= n/2 - 1; j = j + 1)
        if (a[2*j] > a[2*j+1]) {
          temp = a[2*j];
          a[2*j] = a[2*j+1];
          a[2*j+1] = temp;
        }
    } else {
      /* Even phase: compare-exchange the pairs (a[1],a[2]), (a[3],a[4]), ... */
      for (j = 1; j <= n/2 - 1; j = j + 1)
        if (a[2*j-1] > a[2*j]) {
          temp = a[2*j-1];
          a[2*j-1] = a[2*j];
          a[2*j] = temp;
        }
    }
  }
}

End of the serial program.

The parallel program for the same problem follows:

#include <stdlib.h>
#include <mpi.h> /* Include MPI's header file */

/* Forward declarations */
void CompareSplit(int nlocal, int *elmnts, int *relmnts, int *wspace,
                  int keepsmall);
int IncOrder(const void *e1, const void *e2);

int main(int argc, char *argv[])
{
  int n;        /* The total number of elements to be sorted */
  int npes;     /* The total number of processes */
  int myrank;   /* The rank of the calling process */
  int nlocal;   /* The local number of elements */
  int *elmnts;  /* The array that stores the local elements */
  int *relmnts; /* The array that stores the received elements */
  int oddrank;  /* The rank of the process during odd-phase communication */
  int evenrank; /* The rank of the process during even-phase communication */
  int *wspace;  /* Working space during the compare-split operation */
  int i;
  MPI_Status status;

  /* Initialize MPI and get system information */
  MPI_Init(&argc, &argv);
  MPI_Comm_size(MPI_COMM_WORLD, &npes);
  MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

  n = atoi(argv[1]);
  nlocal = n/npes; /* Compute the number of elements to be stored locally.
*/

  /* Allocate memory for the various arrays */
  elmnts  = (int *)malloc(nlocal*sizeof(int));
  relmnts = (int *)malloc(nlocal*sizeof(int));
  wspace  = (int *)malloc(nlocal*sizeof(int));

  /* Fill-in the elmnts array with random elements */
  srandom(myrank);
  for (i = 0; i < nlocal; i++)
    elmnts[i] = random();

  /* Sort the local elements using the built-in quicksort routine */
  qsort(elmnts, nlocal, sizeof(int), IncOrder);

  /* Determine the ranks of the processors that myrank needs to communicate
     with during the odd and the even phases of the algorithm */
  if (myrank % 2 == 0) {
    oddrank  = myrank - 1;
    evenrank = myrank + 1;
  } else {
    oddrank  = myrank + 1;
    evenrank = myrank - 1;
  }

  /* Set the ranks of the processors at the end of the linear order */
  if (oddrank == -1 || oddrank == npes)
    oddrank = MPI_PROC_NULL;
  if (evenrank == -1 || evenrank == npes)
    evenrank = MPI_PROC_NULL;

  /* Get into the main loop of the odd-even sorting algorithm */
  for (i = 0; i < npes - 1; i++) {
    if (i % 2 == 1) /* Odd phase */
      MPI_Sendrecv(elmnts, nlocal, MPI_INT, oddrank, 1, relmnts,
                   nlocal, MPI_INT, oddrank, 1, MPI_COMM_WORLD, &status);
    else            /* Even phase */
      MPI_Sendrecv(elmnts, nlocal, MPI_INT, evenrank, 1, relmnts,
                   nlocal, MPI_INT, evenrank, 1, MPI_COMM_WORLD, &status);

    CompareSplit(nlocal, elmnts, relmnts, wspace,
                 myrank < status.MPI_SOURCE);
  }

  free(elmnts);
  free(relmnts);
  free(wspace);

  MPI_Finalize();
}

/* This is the CompareSplit function */
void CompareSplit(int nlocal, int *elmnts, int *relmnts, int *wspace,
                  int keepsmall)
{
  int i, j, k;

  /* Copy the elmnts array into the wspace array */
  for (i = 0; i < nlocal; i++)
    wspace[i] = elmnts[i];

  if (keepsmall) { /* Keep the nlocal smaller elements */
    for (i = j = k = 0; k < nlocal; k++) {
      if (j == nlocal || (i < nlocal && wspace[i] < relmnts[j]))
        elmnts[k] = wspace[i++];
      else
        elmnts[k] = relmnts[j++];
    }
  } else {         /* Keep the nlocal larger elements */
    for (i = k = nlocal - 1, j = nlocal - 1; k >= 0; k--) {
      if (j < 0 || (i >= 0 && wspace[i] >= relmnts[j])) /* j < 0: relmnts exhausted */
        elmnts[k] = wspace[i--];
      else
        elmnts[k] = relmnts[j--];
    }
  }
}

/* The IncOrder function that is called by qsort is defined as follows */
int IncOrder(const void *e1, const void *e2)
{
  return (*((int *)e1) - *((int *)e2));
}