By Dr. Chris Daft

What is Parallel Computing?

Parallel computing is a simple concept: it means using more than one processor (or CPU) to complete a data-processing task. A couple of decades ago, parallel computing was an arcane branch of computer science. Now it is everywhere: in cell phones, websites, laptops and even wearables. This radical shift was driven by two factors:

  1. Processors are no longer getting faster. In fact, their clock speed has hardly changed since 2003, because pushing it higher makes the chips run too hot to cool economically.
  2. Economic growth in developed countries increasingly depends on the availability of computing resources whose cost drops rapidly over time.

Point (1) is illustrated in Figure 1. Put simply, microprocessors – the chips that are the “brains” of cell phones, tablets and computers – stopped getting faster around 2003:


Figure 1: Processing speed of a single processor, plotted against year of introduction. For many years the speed increased rapidly, but this trend came to a halt around 2003.

Point (2) is a result of how general-purpose data-processing technology is: it benefits a vast range of business activities. Statistician Gordon Reikard writes:

The reason computers and software had such a powerful influence was that their effect was not limited to a single industry. Information technology (IT) could generate substantial spillover effects into other sectors. Examples include local area networks, computer-aided design (CAD-CAM), electronic banking, Internet retailing, statistical quality control, computerized inventory control, and faster communication of ideas. Industrial firms could use computers to reduce cycle times, achieve fewer defects, control inventory, and do specialized production runs tailoring manufacturing to demand. Computers not only made industrial processes more efficient, they made research itself more efficient, since R&D could now be performed with advanced software, leading to faster development and better products.
(emphasis added)

Examples of Parallel Computing, as Explained by an Expert Witness

We can distinguish two broad kinds of parallel computing. The first is grid computing, in which software allows a group of computers to work on a large problem together. This takes advantage of the large amount of time computers spend sitting idle rather than doing useful work.  Grid computing is applied to humanitarian projects such as the search for cancer drugs and the study of protein folding, which underlies diseases such as Alzheimer’s, cystic fibrosis, and BSE (mad cow disease). It is also the foundation of cloud Internet services: Google and Facebook do not run on single large computers, but on clusters of PC-sized machines.
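
To make the idea concrete, here is a rough sketch in Python of how a huge job can be cut into self-contained "work units" that idle computers could each take on independently. This is an illustration I am adding, not code from any real grid project; the candidate names and the scoring function are placeholders for whatever a project such as drug screening actually computes.

    # Rough sketch: cutting one big job into independent work units.
    # score_candidate stands in for whatever a real project computes
    # (e.g. how well a drug molecule binds to a target protein).
    def score_candidate(candidate):
        return sum(ord(c) for c in candidate) % 97   # placeholder computation

    candidates = ["mol-%06d" % i for i in range(1_000_000)]

    # Each work unit is self-contained: an idle PC anywhere can take one,
    # compute it without talking to any other machine, and report back.
    work_units = [candidates[i:i + 1000] for i in range(0, len(candidates), 1000)]

    def process_work_unit(unit):
        return [(c, score_candidate(c)) for c in unit]

    # In a real grid, each call below would run on a different volunteer
    # computer; here a few run locally, one after another, as a demo.
    results = [process_work_unit(u) for u in work_units[:3]]
    print(len(results), "work units processed locally as a demonstration")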

The second kind of parallelism occurs within an integrated circuit (IC) itself. Faced with this roadblock in the speed of individual processors, semiconductor companies created ICs with several processors on a single chip.  Typical laptop processors now contain four “cores”, as these on-chip processors are termed; server CPUs contain many more.
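
As a small illustration (my own sketch, not code from any particular product), the Python program below splits one computation into four chunks so that a four-core laptop can work on all of them at the same time. The sum-of-squares workload and the chunk sizes are arbitrary choices made for the example.

    # Minimal sketch: splitting one computation across several CPU cores.
    from concurrent.futures import ProcessPoolExecutor

    def sum_of_squares(chunk):
        # Work done independently by each core on its slice of the data.
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        data = list(range(8_000_000))
        n_workers = 4                       # e.g. one task per laptop core
        size = len(data) // n_workers
        chunks = [data[i * size:(i + 1) * size] for i in range(n_workers)]

        # Each chunk runs in its own process, so the chunks can execute in
        # parallel on separate cores; the partial sums are then combined.
        with ProcessPoolExecutor(max_workers=n_workers) as pool:
            total = sum(pool.map(sum_of_squares, chunks))

        print(total)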

Video gaming and emerging applications such as deep learning have taken parallel computing within a chip to new levels. These computational tasks respond well to chips that have thousands of simple processing cores on a single piece of silicon.  Originally specialized for improving the appearance and realism of textures and shading in video games, Graphics Processing Units (GPUs) are now in every smartphone and laptop.
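
The style of computation a GPU rewards can be sketched in a few lines of Python. This example is mine and runs on an ordinary CPU using the NumPy library, but it shows the pattern: one operation, written once, applied to every element of a large array, which is exactly the kind of work a GPU can spread across its thousands of simple cores.

    # Sketch of the "same operation on many data elements" pattern that
    # GPUs accelerate.  NumPy runs this on the CPU, but the programming
    # style is the point: one whole-array expression instead of a loop.
    import numpy as np

    pixels = np.random.rand(1920 * 1080)   # brightness of every pixel in one frame

    # One expression describes the work for every element; a GPU can hand
    # each element (or small group of elements) to one of its many cores
    # and process them simultaneously.
    brightened = np.clip(pixels * 1.2, 0.0, 1.0)

    print(brightened[:5])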

Parallel Computing Litigation

Parallel computing is, without doubt, a “hot” area. Leaders in the space such as Nvidia and AMD assemble large engineering teams to design increasingly parallel chips on the most advanced semiconductor processes.  It is no surprise that litigation crops up in such an economically valuable area.  Here are two examples.

The Wisconsin Alumni Research Foundation, the patent-licensing arm of the University of Wisconsin, sued Apple over the way parallel computing is implemented in the processors that power the iPhone and iPad (the A7 CPU and its successors). Apple faced up to $862 million in damages, and in 2015 a jury found the patent infringed and awarded the foundation $234 million.

Another patent battle was waged between Intel and Intergraph Corp. over what became Intel’s EPIC (explicitly parallel instruction computing) architecture. This litigation covered the way Intel’s IA-64 (Itanium) server chips work.  Intel’s server chips are the ICs that run much of the Internet: when you look at a Facebook page or store a file on Dropbox, it is probably an Intel server chip that delivers the data to you.

With these and other disputes, it is clear that the transformation of parallel computing from research oddity to mainstream economic engine is complete. As litigation continues, expert witnesses in parallel computing will remain in demand.

About the author

Dr. Chris Daft is an award-winning, Oxford-educated scientist. He is an expert witness and consultant whose areas of expertise include parallel computing, MEMS, transducers, ultrasound and medical imaging. Dr. Daft has extensive intellectual property experience, including patent development, analysis, licensing, and strategy.  He is a serial inventor who holds 22 U.S. patents, with more pending.  Dr. Daft holds a BA and MA in Physics from Oxford, as well as a doctorate from Oxford in Materials Science.  The author may be contacted at:

chris.daft@riversonicsolutions.com
+1 (415) 800-3734   +1 (408) 806-7525
River Sonic Solutions LLC 2443 Fillmore St #380-4039, San Francisco, CA 94115.

References

Wisconsin Alumni Research Foundation v. Apple Inc., 3:14-cv-00062, United States District Court for the Western District of Wisconsin, January 21, 2014.

Intergraph Corp. v. Intel Corp., 2:01-cv-00160, United States District Court for the Eastern District of Texas, July 30, 2001.