Page 125 - Computer_Science_F5

Data Level Parallelism

While TLP focuses on dividing the task itself, DLP focuses on dividing the data. For instance, if the task is sorting candy, DLP would focus on dividing the candy: each friend could grab a handful of candy and sort it by color independently. In the context of the pizza party, DLP might involve dividing the task of decorating the room further. One friend could hang streamers concurrently while another puts up balloons, effectively utilising multiple "cores" (people) to complete the task faster. Data-Level Parallelism (DLP) is a technique for improving program performance by distributing operations on large datasets across multiple processing units. Instead of focusing on dividing the task itself (as in TLP), DLP emphasises splitting the data into smaller chunks and processing them simultaneously on different cores or processors.

Advantages offered by DLP
(a)  Improved performance: Dividing the workload amongst multiple processing units can significantly reduce execution time for data-intensive tasks.
(b)  Scalability: DLP applications can potentially scale well with increasing hardware resources (cores, processors) by further dividing the data for parallel processing.
(c)  Reduced overhead: Compared to TLP, DLP often incurs less overhead for context switching and maintaining separate thread states.
(d)  Energy efficiency: DLP can contribute to reduced power consumption by enabling faster execution times. By completing tasks quicker, processors can spend less time in active states and more time in low-power modes. However, finding the right balance between parallelism and energy efficiency requires careful optimisation.

Challenges of DLP
(a)  Data dependencies: DLP may not be suitable for tasks with high data dependencies, where processing one data element relies on the results of another.
(b)  Granularity: Finding the optimal data chunk size for efficient processing is crucial. Chunks that are too small can lead to increased overhead, while overly large chunks might not utilise all processing units effectively.
(c)  Algorithm design: Implementing data-level parallelism requires careful algorithm design to ensure proper data division and synchronisation between processing units.

Applications of DLP
(a)  Image processing: Consider an application that needs to apply a filter (such as blur or sharpen) to a high-resolution image. DLP allows us to divide the image into smaller tiles. Each core can then independently apply the filter to its assigned tile, significantly accelerating the processing time compared to a single-core approach.
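The image-tiling idea can be sketched in Python. This is a minimal illustration rather than a method from the textbook: the tiny 8×8 pixel grid, the simple "sharpen" point filter, and the use of a thread pool are all assumptions made for demonstration (a real image-processing program would use an image library and, typically, separate processes or SIMD hardware for true parallel speed-up).

```python
from concurrent.futures import ThreadPoolExecutor

# A tiny stand-in "image": an 8x8 grid of grey-scale pixel values (0-255).
# (Hypothetical data; a real program would load an actual image.)
image = [[10 * (r + c) % 256 for c in range(8)] for r in range(8)]

def sharpen_tile(tile):
    """Apply a simple point filter (double each pixel, clamp at 255) to one tile."""
    return [[min(255, 2 * px) for px in row] for row in tile]

def split_rows(img, n_tiles):
    """Divide the image into n_tiles horizontal bands of rows."""
    size = (len(img) + n_tiles - 1) // n_tiles
    return [img[i:i + size] for i in range(0, len(img), size)]

# Each worker filters its assigned tile independently: data-level parallelism.
tiles = split_rows(image, 4)
with ThreadPoolExecutor(max_workers=4) as pool:
    filtered_tiles = list(pool.map(sharpen_tile, tiles))

# Stitch the independently processed tiles back into one image.
filtered = [row for tile in filtered_tiles for row in tile]
```

Because no tile depends on any other tile's result, the filter can run on every band at the same time, which is exactly the low-data-dependency situation in which DLP works well.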

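The granularity challenge described above can also be seen in code. The sketch below uses illustrative assumptions (a dataset of 1 000 integers, a sum-of-squares workload, and a four-worker thread pool): both a coarse and a fine chunk size give the same answer, and only the scheduling overhead and processor utilisation would differ in practice.

```python
from concurrent.futures import ThreadPoolExecutor

data = list(range(1_000))  # hypothetical dataset for illustration

def chunk(seq, size):
    """Split a dataset into chunks of at most `size` elements."""
    return [seq[i:i + size] for i in range(0, len(seq), size)]

def process(part):
    """Stand-in for real per-element work: square and sum one chunk."""
    return sum(x * x for x in part)

def parallel_sum_squares(seq, chunk_size, workers=4):
    """Process all chunks in parallel, then combine the partial results."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(process, chunk(seq, chunk_size)))

coarse = parallel_sum_squares(data, chunk_size=250)  # a few large chunks
fine = parallel_sum_squares(data, chunk_size=10)     # many small chunks
```

Choosing `chunk_size` is the granularity decision: very small chunks multiply the per-chunk scheduling cost, while chunks larger than the dataset divided by the worker count leave some workers idle.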
