CAD/GIS platforms must go to the GPU

Those of us who use graphics-heavy applications always hope our machines have enough working memory. In this respect, CAD/GIS programs have always been questioned, or measured, by the time it takes them to perform everyday tasks such as:

  • Spatial analysis
  • Image correction and registration
  • Displaying massive datasets
  • Managing data within a geodatabase
  • Serving data

The typical PC has not changed much in recent years: RAM, hard disk, graphics memory, and features have only kept growing, but the CPU's operating logic has kept its original design (that's why we still call it the CPU). There is also the downside that as machines grow in capacity, programs kill that gain by being designed to consume the new potential.


As an example (and only as an example): put two users on identical hardware with the same data, one running AutoCAD 2010 and the other Microstation V8i, each loading 14 raster images, a parcel map of 8,000 properties, and a connection to an Oracle Spatial database. Then we ask the question:

What does one of the two have that keeps it from overburdening the machine?

The answer is not innovation; it is simply how the program is built, because the same does not happen with Autodesk Maya, which does far crazier things and performs better. The way both programs exploit the PC is (so far) the same, and based on that we blame the programs, because we push them to work hard. That is why some machines are known as ordinary PCs, workstations, or servers: not because they are a different color, but because of how they perform when running high-consumption programs for graphic design, video processing, application development, server roles and, in our case, work with spatial data.

Less CPU, more GPU

The most notable recent change in PC architecture is what has been coined the GPU (graphics processing unit), which gets better performance from your computer by converting large routines into many smaller simultaneous ones, without going through the management of the CPU (central processing unit), whose working capacity is split among hard disk revolutions, memory, video memory, and other housekeeping tasks (not many others).

Graphics cards are not there just to add video memory; they include a processor with hundreds of cores designed to run parallel processes. They have (more or less) always had this attribute, but the current advantage is that manufacturers now offer a (nearly) open architecture, so that software developers can take the existence of such a card into account and exploit its potential. This month's PC Magazine (January) mentions companies such as NVIDIA and ATI, among others, within the OpenCL alliance.
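As a rough mental model of those hundreds of cores, here is a pure-Python sketch of the GPU execution model: one tiny "kernel" function runs once per data element, identified only by its index. Everything here (the kernel, the launch helper, the pixel data) is illustrative, not a real OpenCL or CUDA API:

```python
# Toy simulation of the GPU execution model: one tiny "kernel"
# runs once per data element, identified only by its index.
# (Pure-Python stand-in; a real GPU runs thousands of these at once.)

def brighten_kernel(idx, pixels, out, gain):
    """Each 'thread' handles exactly one pixel."""
    out[idx] = min(255, int(pixels[idx] * gain))

def launch(kernel, n, *args):
    """Simulated kernel launch: on a GPU these n calls run in parallel."""
    for idx in range(n):          # sequential here; parallel on hardware
        kernel(idx, *args)

pixels = [10, 100, 200, 250]
out = [0] * len(pixels)
launch(brighten_kernel, len(pixels), pixels, out, 1.5)
print(out)  # [15, 150, 255, 255]
```

The point of the model is that the kernel has no loop of its own: the hardware supplies the index, and because no element depends on another, all of them can run at the same time.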

To understand the difference between CPU and GPU, here is a simile:

The CPU is like a municipality with everything centralized: it has an urban plan and knows it must control its growth, but it cannot even monitor the new constructions that are violating the rules. Yet instead of granting this service to a private company, it insists on keeping the role for itself; the population does not know where to complain about the neighbor who is taking over the sidewalk, and the city grows more cluttered every day.

Sorry, I was not talking about your mayor; it was only a simile for the CPU, where the central processing unit (under Windows) must carry the machine's work on processes such as:

  • Programs that run when Windows starts, like Skype, Yahoo Messenger, the antivirus, the Java engine, etc. All of them consume part of the working memory at low priority, but unnecessarily, unless trimmed through msconfig (which some people ignore).
  • Services that are running: parts of Windows, commonly used programs, connected hardware, or programs that were uninstalled but remain there running. These usually have medium to high priority.
  • Programs in use, which take up space at high priority. We feel their execution speed in our liver, because we curse when they do not run fast despite our having a high-performance machine.

And although Windows does its juggling, habits like keeping many programs open, installing and uninstalling irresponsibly, or adding unnecessary themes that just look pretty (Spanish idiom “pintón”: applied to a person it means handsome) make us accomplices in the machine's poor performance.

What happens, then, when we start one of the processes mentioned at the beginning, is that the processor thinks hard (Spanish idiom: “romperse el coco”, to break its head) about how to prioritize it over the other programs in use. Its few options for optimization are the RAM, the video memory (often shared), whatever it can get from a graphics card if one is present, the type of hard disk, and other trifles that might make the plaintive cry a little quieter.

The GPU, with its parallel processes, is as if the municipality decided to decentralize, concession, or privatize the things that are beyond its reach, even major processes, by handing them out as small tasks. So, under current legislation, the specific role of monitoring punishable violations is delivered to a private company. As a result (only an example), citizens can indulge in the delicious pleasure of denouncing (Spanish idiom: “contar las costillas”, to count the ribs of) the neighbor who lets his dog foul the sidewalk, who builds a wall onto part of the sidewalk, who parks his car improperly, etc. The company takes the call, goes to the site, processes the case, takes it to court, and collects the fine; half goes to the municipality, the rest is a profitable business.

The GPU works the same way: programs can be designed not to send bulk processes in the conventional way, but in parallel, as small filtered routines. Oh, wonderful!
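The "small filtered routines" idea can be sketched in Python, using a thread pool as a stand-in for the GPU's many cores; the raster row, the tile size, and the per-tile statistic are all made up for the illustration:

```python
# Sketch: instead of one bulk pass over a raster, split it into tiles
# and hand each tile to a separate worker, GPU-style.
# (Tile size and the statistic computed are illustrative only.)
from concurrent.futures import ThreadPoolExecutor

def tile_mean(tile):
    """Small routine applied independently to each tile."""
    return sum(tile) / len(tile)

def chunked(data, size):
    """Break the bulk job into small, independent pieces."""
    return [data[i:i + size] for i in range(0, len(data), size)]

raster_row = list(range(100))            # stand-in for real raster data
tiles = chunked(raster_row, 25)

with ThreadPoolExecutor() as pool:       # the "many small cores"
    means = list(pool.map(tile_mean, tiles))

print(means)  # [12.0, 37.0, 62.0, 87.0]
```

The design choice is the same one the municipality made: the big job is only fast because no tile has to wait for any other.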

So far, not many vendors are building their applications with these features. Most aspire to reach 64 bits to solve their slowness problems, although we all know that Mr. Bill Gates will always fill those capacities with unnecessary things in upcoming versions of Windows. Microsoft's strategy includes taking advantage of the GPU through APIs designed to work on DirectX 11, which will surely be an alternative that all (or most) accept, since they will prefer a standard to doing crazy per-brand things outside OpenCL.


The figure shows how, between 2003 and 2008, NVIDIA's GPU processors have been revolutionizing their capabilities compared to Intel's CPU processors. Here is also the smoked explanation (Spanish idiom: “fumada”, in this case a dense technical explanation) of the difference.

But the GPU's potential is there, and hopefully CAD/GIS programs will squeeze all the juice out of it (Spanish idiom: “sacarle el jugo”). We have already heard of cases; the most striking comes from Manifold GIS with NVIDIA's CUDA cards, in which a digital terrain model generation process that used to take over 6 minutes was done in just 11 seconds by taking advantage of the CUDA card. That smoke (Spanish idiom “fumada”, a very technical, ingenious idea) won them the Geotech 2008.
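To see why a terrain model parallelizes so well, here is a toy Python sketch using inverse-distance weighting; it is emphatically not Manifold's actual algorithm, just an illustration that every grid cell can be computed independently, one thread per cell:

```python
# Toy sketch of why terrain-model generation parallelizes so well:
# every grid cell is interpolated on its own (inverse-distance
# weighting here), so each cell could be one GPU thread.
# NOT Manifold's real algorithm; sample points are invented.

def idw(x, y, samples):
    """Inverse-distance-weighted elevation at (x, y)."""
    num = den = 0.0
    for sx, sy, sz in samples:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0:
            return sz                 # exactly on a survey point
        w = 1.0 / d2
        num += w * sz
        den += w
    return num / den

samples = [(0, 0, 100.0), (10, 0, 110.0), (0, 10, 90.0)]
# Each cell below depends only on the samples, never on its
# neighbors: perfect for one-thread-per-cell execution.
grid = [[idw(x, y, samples) for x in range(0, 11, 5)]
        for y in range(0, 11, 5)]
print(grid[0][0])  # 100.0 (sits on the first survey point)
```

A 6-minute-to-11-second jump is plausible for exactly this kind of workload: millions of identical, independent little calculations.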

In conclusion: we are headed for the GPU; we will surely see much of this in the next two years.
