
High-performance Computing May Improve Combustion Efficiency

phantom power Donating Member (1000+ posts) | Sun Sep-11-05 07:12 PM
Original message
High-performance Computing May Improve Combustion Efficiency
Rising oil prices have revved up the momentum to develop more efficient combustion systems. But instrumental to this goal is a greater understanding of the complex chemical reactions involved in combustion processes.

In one of the largest simulations ever brought to bear on this problem, researchers at Pacific Northwest National Laboratory performed quantum chemical calculations to accurately predict the heat of formation of octane, a key component of gasoline.

The calculation, performed using 1,400 parallel processors, took only 23 hours to complete and achieved a sustained efficiency of 75 percent, compared to the 5 to 10 percent efficiency of most codes. For comparison, the best single-processor desktop computer would have required three and a half years and 2.5 terabytes of memory to run the calculation.
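
A quick back-of-the-envelope check on those figures (a sketch only, assuming the three-and-a-half-year number is single-processor wall-clock time for the same calculation):

serial_hours = 3.5 * 365 * 24   # roughly 30,660 hours on one desktop processor
parallel_hours = 23.0           # reported wall-clock time on the cluster
processors = 1400

speedup = serial_hours / parallel_hours        # roughly 1,330x
scaling_efficiency = speedup / processors      # roughly 0.95, i.e. about 95%
print(f"speedup ~{speedup:.0f}x, scaling efficiency ~{scaling_efficiency:.0%}")

Note that the "75 percent sustained efficiency" quoted in the article most likely refers to the fraction of the machine's theoretical peak performance, which is a different metric from the scaling efficiency computed above.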

These pioneering calculations also helped identify the level of theory needed for subsequent efforts to reliably predict the heat of formation of larger alkanes in diesel fuel, for which there is very little experimental data, and the heat of formation of key reactive intermediates, such as alkyl and alkoxy radicals, for which there is no experimental data.

http://www.sciencedaily.com/releases/2005/09/050911110208.htm
papau Donating Member (1000+ posts) | Sun Sep-11-05 07:29 PM
Response to Original message
1. WOW - way cool!
:-)
 
RC Donating Member (1000+ posts) | Sun Sep-11-05 07:50 PM
Response to Original message
2. This is some neat stuff, but why do I get the impression that
this is too little, too late? We need alternative energy sources. Apply this to that.
 
phantom power Donating Member (1000+ posts) | Mon Sep-12-05 09:45 AM
Response to Reply #2
5. If they extend the study to diesel, that would probably be valuable.
Diesel is the main component in many biodiesel-based sustainable fuel schemes.
 
bemildred Donating Member (1000+ posts) | Sun Sep-11-05 08:30 PM
Response to Original message
3. Must have been some great parallel programming
to get that sort of efficiency at that scale.
:thumbsup:
 
phantom power Donating Member (1000+ posts) | Mon Sep-12-05 09:43 AM
Response to Reply #3
4. Parallel efficiency is usually governed by two things...
1) The nature of the dependency lattice. If the computation can be broken into many, many pieces that all take about the same amount of time to compute, then you can get high efficiency. If a few of the pieces take much longer to compute, then most of the processors end up burning cycles just waiting for those few to finish (a toy sketch of this appears after this list).

2) The amount of data that must be shared between processing nodes. If processors have to share lots of data between themselves as they run the computations, that generally reduces efficiency, unless you have designed in tons of bus bandwidth between the processors. That kind of parallel system is expensive.
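
A minimal toy sketch of point (1), with made-up task times (illustrative only, not numbers from the PNNL run), showing how a single straggler task drags parallel efficiency down:

# Toy model: each processor takes exactly one task, and the step finishes
# only when the slowest task does; the other processors just sit idle.
def parallel_efficiency(task_times):
    total_work = sum(task_times)             # useful work actually done
    wall_clock = max(task_times)             # everyone waits for the straggler
    capacity = wall_clock * len(task_times)  # processor-time paid for
    return total_work / capacity

balanced = [1.0] * 8            # eight equal tasks
imbalanced = [1.0] * 7 + [5.0]  # one task takes five times longer

print(parallel_efficiency(balanced))    # 1.0 -> 100% efficiency
print(parallel_efficiency(imbalanced))  # 0.3 -> 30%, the rest is idle waiting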

Based on my almost infinitely inadequate knowledge of quantum mechanics, I would guess that this was run as a Monte Carlo study: each processor ran a different version of the same simulation, to sample the distribution of quantum possibilities for the reaction. Pretty much like they do weather predictions: they run many simulations starting from different boundary conditions and do some kind of averaging to produce a "prediction".

At any rate, a Monte Carlo study would be almost ideal for efficiency: each processor would run its own independent simulation, and they would probably all take about the same time.
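
For the curious, here is a minimal sketch of that kind of embarrassingly parallel Monte Carlo setup, using a stand-in pi estimate rather than any actual quantum chemistry (this only illustrates the guess above, not what the PNNL code really does):

import random
from multiprocessing import Pool

def one_simulation(seed):
    # Stand-in for one independent simulation: estimate pi by sampling
    # random points in the unit square, using this worker's own seed.
    rng = random.Random(seed)
    hits, n = 0, 100_000
    for _ in range(n):
        x, y = rng.random(), rng.random()
        hits += (x * x + y * y) <= 1.0
    return 4.0 * hits / n

if __name__ == "__main__":
    # Eight workers stand in for the 1,400 processors; nothing is shared
    # between them while they run, so efficiency stays high.
    with Pool(processes=8) as pool:
        results = pool.map(one_simulation, range(8))
    print(sum(results) / len(results))  # average the independent runs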
 
bemildred Donating Member (1000+ posts) | Mon Sep-12-05 12:34 PM
Response to Reply #4
6. Yep.
"inherent serial component" and "communications overhead".

But 1,400 is a lot of nodes; it has to be done right to get that
sort of speedup (1400 * 0.75 == 1050, three orders of magnitude),
even with what must have been a highly parallelizable problem.
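
A quick sketch of what that inherent serial component means at this scale, via Amdahl's law (the serial fractions below are made-up values, just to show how little serial work 1,400 nodes can tolerate):

def amdahl_speedup(serial_fraction, processors):
    # Amdahl's law: the serial part never speeds up, only the rest does.
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

for s in (0.0, 0.0001, 0.001, 0.01):
    speedup = amdahl_speedup(s, 1400)
    print(f"serial fraction {s:.4%}: speedup {speedup:>6.0f}x, "
          f"efficiency {speedup / 1400:.0%}")

# Hitting ~1,050x on 1,400 processors (75% scaling efficiency) requires the
# serial-plus-communication share to be well under 0.1 percent of the work.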
 
