THE DESIGN AND ANALYSIS OF PARALLEL ALGORITHMS SELIM G. AKL PDF


Author: Mikarr Tojajin
Country: Maldives
Language: English (Spanish)
Genre: Marketing
Published (Last): 25 July 2013
Pages: 205
PDF File Size: 17.98 Mb
ePub File Size: 18.88 Mb
ISBN: 828-8-75815-681-7
Downloads: 88800
Price: Free* [*Free Registration Required]
Uploader: Vumi



The Design and Analysis of Parallel Algorithms. Selim G. Akl. Queen's University, Kingston, Ontario, Canada. Bibliography: p. Includes index. Manufacturing buyer: Mary Noonan. All rights reserved. No part of this book may be reproduced, in any form or by any means, without permission in writing from the publisher. Printed in the United States of America. To Theo, for making it worthwhile. 1 The Need for Parallel Computers, 1.

The need for ever faster computers has not ceased since the beginning of the computer era. Every new application seems to push existing computers to their limit. So far, computer manufacturers have kept up with the demand admirably well. In the mid-1940s, the electronic components used to build computers could switch from one state to another about 10,000 times every second.

These figures mean that the number of operations a computer can do in one second has doubled roughly every two years over the past forty years. This is very impressive, but how long can it last? It is generally believed that the trend will continue until the end of this century. It may even be possible to maintain it a little longer by using optically based or even biologically based components. What happens after that?
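To make the doubling claim concrete, here is a minimal sketch of the implied growth, taking the roughly 10,000 switchings per second of the mid-1940s as the baseline (the exact baseline figure is an assumption for illustration):

    # Growth implied by a doubling every two years over forty years.
    baseline_ops = 10_000          # operations per second, mid-1940s baseline
    doublings = 40 // 2            # one doubling every two years -> 20
    factor = 2 ** doublings        # 2**20 = 1,048,576: about a million-fold
    print(baseline_ops * factor)   # roughly 10**10 operations per second

That million-fold factor is consistent with the few billion operations per second quoted for supercomputers later in this chapter.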

If the current and contemplated applications of computers are any indication, our requirements in terms of computing speed will continue, at least at the same rate as in the past, well beyond the year 2000. Already, computers faster than any available today are needed to perform the enormous number of calculations involved in developing cures for mysterious diseases.

They are essential to applications where the human ability to recognize complex visual and auditory patterns is to be simulated in real time. And they are indispensable if we are to realize many of humanity's dreams, ranging from reliable long-term weather forecasting to interplanetary travel and outer space exploration.

It appears now that parallel processing is the way to achieve these desired computing speeds. The overwhelming majority of computers in existence today, from the simplest to the most powerful, are conceptually very similar to one another. Their architecture and mode of operation follow, more or less, the same basic design principles formulated in the late 1940s and attributed to John von Neumann. The ingenious scenario is very simple and essentially goes as follows: A control unit fetches an instruction and its operands from a memory unit and sends them to a processing unit; there the instruction is executed and the result sent back to memory.

This sequence of events is repeated for each instruction. There is only one unit of each kind, and only one instruction can be executed at a time. With parallel processing the situation is entirely different. A parallel computer is one that consists of a collection of processing units, or processors, that cooperate to solve a problem by working simultaneously on different parts of that problem.
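A minimal sketch of that fetch-and-execute cycle may help; the toy instruction set and memory layout below are invented for illustration, not taken from the book:

    # A toy von Neumann machine: one memory, one control unit fetching
    # instructions one at a time, one processing unit executing them.
    memory = {"a": 3, "b": 4, "c": 0}
    program = [("ADD", "a", "b", "c")]            # meaning: c <- a + b

    for op, x, y, dest in program:                # control unit fetches
        if op == "ADD":                           # processing unit executes
            memory[dest] = memory[x] + memory[y]  # result returns to memory
    print(memory["c"])                            # 7

Note that nothing overlaps: each instruction completes before the next one is fetched.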

The number of processors used can range from a few tens to several millions. As a result, the time required to solve a problem is often significantly reduced compared with what a traditional uniprocessor computer would need. This approach is attractive for a number of reasons. First, for many computational problems, the natural solution is a parallel one. Second, the cost and size of computer components have declined so sharply in recent years that parallel computers with a large number of processors have become feasible.

And, third, it is possible in parallel processing to select the parallel architecture that is best suited to solve the problem or class of problems under consideration. Indeed, architects of parallel computers have the freedom to decide how many processors are to be used, how powerful these should be, what interconnection network links them to one another, whether they share a common memory, to what extent their operations are to be carried out synchronously, and a host of other issues.

This wide range of choices is reflected in the many theoretical models of parallel computation that have been proposed, as well as in the several parallel computers that have actually been built. Parallelism is sure to change the way we think about and use computers.

It promises to put within our reach solutions to problems and frontiers of knowledge never dreamed of before. The rich variety of architectures will lead to the discovery of novel and more efficient solutions to both old and new problems.

It is important therefore to ask: How do we solve problems on a parallel computer? The primary ingredient in solving a computational problem on any computer is the solution method, or algorithm. This book is about algorithms for parallel computers. It describes how to go about designing algorithms that exploit both the parallelism inherent in the problem and that available on the computer.
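As a first toy example of exploiting such parallelism, here is a sketch of a parallel sum; the four-way split and the use of Python's multiprocessing module are illustrative assumptions, not the book's method:

    from multiprocessing import Pool

    def partial_sum(chunk):
        # Each processor works on its own part of the problem.
        return sum(chunk)

    if __name__ == "__main__":
        data = list(range(1_000_000))
        p = 4                                    # number of processors (arbitrary)
        chunks = [data[i::p] for i in range(p)]  # split the work p ways
        with Pool(p) as pool:
            partials = pool.map(partial_sum, chunks)  # simultaneous work
        print(sum(partials) == sum(data))        # True: partial results combine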

It also shows how to analyze these algorithms in order to evaluate their speed and cost. The computational problems studied in this book are grouped into three classes. These problems were chosen for their fundamental nature.
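Two measures recur throughout such an analysis, speedup and cost; the sketch below states them directly (the running times and processor count are hypothetical inputs, not figures from the book):

    def speedup(t_sequential, t_parallel):
        # How much faster the parallel algorithm is than the best
        # known sequential algorithm for the same problem.
        return t_sequential / t_parallel

    def cost(p, t_parallel):
        # Parallel running time multiplied by the number of processors;
        # an algorithm is cost optimal when this matches the sequential
        # running time, up to a constant factor.
        return p * t_parallel

    print(speedup(64, 8))   # 8.0 (hypothetical times)
    print(cost(8, 8))       # 64  (hypothetical count and time)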

It is shown how a parallel algorithm is designed and analyzed to solve each problem. In some cases, several algorithms are presented that perform the same job, each on a different model of parallel computation. Examples are used as often as possible to illustrate the algorithms.

Where necessary, a sequential algorithm is outlined for the problem at hand. Additional algorithms are briefly described in the Problems and Bibliographical Remarks sections. A list of references to other publications, where related problems and algorithms are treated, is provided at the end of each chapter.

The book may serve as a text for a graduate course on parallel algorithms. It was used at Queen's University for that purpose during the fall term; the class met for four hours every week over a period of twelve weeks. One of the four hours was devoted to student presentations of additional material, reference to which was found in the Bibliographical Remarks sections. The book should also be useful to computer scientists, engineers, and mathematicians who would like to learn about parallel algorithms.

It is assumed that the reader possesses the background normally provided by an undergraduate introductory course on the design and analysis of algorithms. The most pleasant part of writing a book is when one finally gets a chance to thank those who helped make the task an enjoyable one.

Four people deserve special credit: Ms. Irene LaFleche prepared the electronic version of the manuscript with her natural cheerfulness and unmistakable talent. The diagrams are the result of Mr. Mark Attisha's expertise, enthusiasm, and skill. Bruce Chalmers offered numerous trenchant and insightful comments on an early draft. Advice and assistance on matters big and small were provided generously by Mr. Thomas Bradshaw. I also wish to acknowledge the several helpful suggestions made by the students in my CISC class at Queen's.

The support provided by the staff of Prentice Hall at every stage is greatly appreciated. Finally, I am indebted to my wife, Karolina, and to my two children, Sophia and Theo, who participated in this project in more ways than I can mention.

Theo, in particular, spent the first year of his life examining, from a vantage point, each word as it appeared on my writing pad. Selim G. Akl, Kingston, Ontario.

The data represent information on the earth's weather, pollution, agriculture, and natural resources.

In order for this information to be used in a timely fashion, it needs to be processed at a speed of at least operations per second. Back on earth, a team of surgeons wish to view on a special display a reconstructed three-dimensional image of a patient's body in preparation for surgery.

They need to be able to rotate the image at will, obtain a cross-sectional view of an organ, observe it in living detail, and then perform a simulated surgery while watching its effect, all without touching the patient. A minimum processing speed of operations per second would make this approach worthwhile. The preceding two examples are representative of applications where tremendously fast computers are needed to process vast amounts of data or to perform a large number of calculations quickly, or at least within a reasonable length of time.

Other such applications include aircraft testing, the development of new drugs, oil exploration, modeling fusion reactors, economic planning, cryptanalysis, managing large databases, astronomy, biomedical analysis, real-time speech recognition, robotics, and the solution of large systems of partial differential equations arising from numerical simulations in disciplines as diverse as seismology, aerodynamics, and atomic, nuclear, and plasma physics.

No computer exists today that can deliver the processing speeds required by these applications. Even the so-called supercomputers peak at a few billion operations per second. Over the past forty years dramatic increases in computing speed were achieved, largely due to the use of inherently faster electronic components by computer manufacturers.

As we went from relays to vacuum tubes to transistors, and from small to medium to large and then to very large scale integration, we witnessed, often in amazement, the growth in size and range of the computational problems that we could solve.

Unfortunately, it is evident that this trend will soon come to an end. The limiting factor is a simple law of physics that gives the speed of light in a vacuum: approximately 3 x 10^8 meters per second. Now, assume that an electronic device can perform 10^12 operations per second. Then it takes longer for a signal to travel between two such devices one-half of a millimeter apart than it takes for either of them to process it.

In other words, all the gains in speed obtained by building superfast electronic components are lost while one component is waiting to receive some input from another one.
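A quick check of the arithmetic behind this claim (the device speed is the 10^12 operations per second assumed above):

    # Signal travel time versus processing time for the figures above.
    c = 3.0e8            # speed of light in a vacuum, meters per second
    d = 0.5e-3           # separation of the two devices: half a millimeter
    ops_per_sec = 1e12   # assumed speed of one electronic device

    travel_time = d / c              # about 1.67e-12 seconds
    op_time = 1 / ops_per_sec        # 1e-12 seconds per operation
    print(travel_time > op_time)     # True: communication, not computation,
                                     # becomes the bottleneck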


The Design of Efficient Parallel Algorithms

Title: The Design and Analysis of Parallel Algorithms. Publisher: Prentice Hall Press. New textbook on a hot topic. It is appropriate for a graduate course. Includes many worked examples.


The Design and Analysis of Parallel Algorithms

Handbook on Parallel and Distributed Processing. This chapter serves as an introduction to the study of parallel algorithms, in particular how they differ from conventional algorithms, how they are designed, and how they are analyzed to evaluate their speed and cost.
