
NOAA Goes Live With New Forecasting Supercomputers

dcblogs writes "The National Oceanic and Atmospheric Administration (NOAA) on Thursday switched on two new supercomputers that are expected to improve weather forecasting. The supercomputers are each 213-teraflops systems running a Linux operating system on Intel processors. The U.S. is paying about $20 million a year to operate the leased systems. The NWS has a new hurricane model, Hurricane Weather Research and Forecasting (HWRF), which is 15% more accurate at day five of a forecast for both track and intensity. That model is now operational and running on the new systems. In nine months, NWS expects to improve the resolution of the system from 27 kilometers to 13 kilometers. The European system, credited with doing a better job of predicting Sandy's path, is at 16 kilometers resolution. In June, the European forecasting agency said it had a deal to buy Cray systems capable of petascale performance."
Comments:
  • by Bud Light Lime ( 2796025 ) on Friday July 26, 2013 @08:23AM (#44389847)
    Larger atmospheric features such as air masses and mid-latitude cyclones are more predictable than smaller features. Thunderstorms are much smaller and less predictable. Also, thunderstorms are driven by instability in the atmosphere: if air is nudged upward, it will accelerate upward. This occurs when warm (or hot), moist air sits beneath cold air aloft. If there's a lot of cloud cover left over from thunderstorms the previous day, for example, that makes predicting thunderstorm chances the next day much more difficult. Predicting the behavior of large air masses is done with much more skill than predicting smaller features such as thunderstorms.
  • by hwrfboy ( 2997951 ) on Friday July 26, 2013 @09:26AM (#44390203)

    I'm actually an HWRF developer, and you are correct that the summary was wrong. Our innermost domain is 3 km, at a size of around 600x600 km, intended to resolve the storm's inner core region (the area with the dangerous winds and, typically, the largest rainfall). It sits within a larger 1100x1100 km, 9 km resolution domain for resolving the nearby environment, and there is a gigantic 7500x7500 km, 27 km resolution domain to resolve the large-scale systems that drive the track. (A rough grid-size calculation for these nests is sketched at the end of this comment.) Also, the 3 km resolution is not just needed to resolve convection: you need it to resolve some of the processes involved in intensity change and in concentration of the wind maximum, such as double eyewalls, mesovortices, hot towers, and vorticity sheets. The GFS is our boundary condition, and part of our initial condition. We tried using ECMWF instead as an experiment, but that caused mixed results on track and worse intensity. The intensity issues are likely due to their model's lack of skill at intensity prediction and primitive ocean model. (GFS has better hurricane intensity than ECMWF, despite having lower resolution!) ECMWF also has completely different physics and dynamics than ours, which results in larger shocks at the boundary.

    You can see a better description of our model on our website:

    http://www.emc.ncep.noaa.gov/index.php?branch=HWRF [noaa.gov]

    and if you're interested in running HWRF yourself, you can do that too, though it will be another week or two before the new 2013 version is publicly available. HWRF is an open-source model, put out by the NOAA Developmental Testbed Center (DTC), which handles the public distribution and community support. (Support of HWRF installations in other countries' forecast centers is generally handled through the NOAA Environmental Modeling Center (EMC).) Here is the webpage for user support and downloads:

    http://www.dtcenter.org/HurrWRF/users/overview/hwrf_overview.php [dtcenter.org]

    As for your point about improved resolution not helping the GFS, that's not true, especially in the case of hurricanes. The resolution of the GFS (~27km) is so low that it cannot even resolve the structure of most storms, let alone see the complex features involved in predicting intensity, rainfall or the finer points of track. When it can resolve the storm, such as with Superstorm Sandy, it has intensity skill competitive with regional models. The upcoming GFS upgrades to 13km and later 9km resolution (~2-4 years away) will allow the model to get a good idea of the basic structure of the storm, and start having real skill at predicting intensity, even for smaller storms. That, in turn, will help the HWRF and GFDL regional hurricane models improve their track and intensity prediction since they both rely on GFS for initial and boundary conditions.
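
    The nest sizes described at the top of this comment imply grid dimensions roughly as follows. This is only a back-of-the-envelope sketch in Python: the extents and spacings are taken from the comment, while the operational HWRF grids actually use rotated latitude-longitude coordinates with movable, storm-following nests, so treat the numbers as ballpark figures.

```python
# Rough grid-point counts for the HWRF nesting described in the comment
# above.  Extents (km) and grid spacings (km) come from that comment; the
# real HWRF grids are rotated lat-lon with storm-following nests, so these
# are approximations only.
domains = {
    "outer (large-scale / track)": (7500.0, 27.0),
    "middle (near environment)":   (1100.0,  9.0),
    "inner (storm core)":          ( 600.0,  3.0),
}

for name, (extent_km, spacing_km) in domains.items():
    n = int(extent_km / spacing_km)          # grid points per side
    print(f"{name:30s} ~{n} x {n} points ({n * n:,} total)")
```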

  • Re:Beaches (Score:5, Informative)

    by hwrfboy ( 2997951 ) on Friday July 26, 2013 @09:43AM (#44390355)

    The summary is confusing two different models: HWRF and GFS. The HWRF model is a public model you can download and run, as long as you have ~20 GB of RAM free on your computer:

    http://www.dtcenter.org/HurrWRF/users/overview/hwrf_overview.php [dtcenter.org]

    There is a public version of the GFS, but I'm not sure where. I'm mainly an HWRF developer.

    Also, you can download GFS and HWRF forecasts in real time (i.e., files appear less than 10 minutes after they're created by the operational NCEP WCOSS supercomputer) from the links below; a small download sketch follows at the end of this comment:

    GFS: ftp://ftpprd.ncep.noaa.gov/pub/data/nccf/com/gfs/prod/gfs.*/ [noaa.gov]

    You want the files named gfs.t??z.pgrb2f* - those are the forecast files every 1-6 hours at 0.5 degree resolution.

    The HWRF real-time data is here:

    HWRF: ftp://ftpprd.ncep.noaa.gov/pub/data/nccf/com/hur/prod/hwrf.*/ [noaa.gov]

    The *.hwrfprs_* files contain model fields: *prs_n* is the 3 km domain, *prs_m* is the combined 9 & 3 km output, *prs_p* is the 27 km domain, *prs_i* is the 9 km domain, and *prs_c* is the combined 27:9:3 km output. The track files are *.atcfunix for six-hourly, *.3hourly for three-hourly, and *.htcf for experimental per-timestep (5 second) information.

    You can also get archived track files from a three season retrospective test of the GFS and various HWRF configurations here:

    http://www.emc.ncep.noaa.gov/HWRF/tracks/ [noaa.gov]

    The formats of the track files contained within are described well on JTWC's website (JTWC is the equivalent of the NHC for everything not near the mainland US); a rough parser sketch also follows at the end of this comment:

    http://www.usno.navy.mil/NOOC/nmfc-ph/RSS/jtwc/best_tracks/
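
    To make those FTP paths and wildcards concrete, here is a minimal Python sketch of pulling the newest run. The host and the directory/file globs come straight from this comment; the anonymous login, the "newest directory" logic, and the fetch_latest helper are assumptions, and the server layout may change, so treat it as a starting point rather than a supported client.

```python
# Minimal sketch: grab a few real-time GFS/HWRF files from the NCEP FTP
# server named above.  Paths and wildcards are from the comment; the
# helper itself is illustrative and unsupported.
import fnmatch
from ftplib import FTP

HOST = "ftpprd.ncep.noaa.gov"

def fetch_latest(base_dir, dir_glob, file_glob, limit=1):
    """Download up to `limit` files matching file_glob from the newest
    run directory matching dir_glob under base_dir."""
    ftp = FTP(HOST)
    ftp.login()                          # anonymous login
    ftp.cwd(base_dir)
    run_dirs = sorted(d for d in ftp.nlst() if fnmatch.fnmatch(d, dir_glob))
    ftp.cwd(run_dirs[-1])                # newest dated run directory
    wanted = sorted(f for f in ftp.nlst() if fnmatch.fnmatch(f, file_glob))
    for name in wanted[:limit]:
        with open(name, "wb") as out:
            ftp.retrbinary("RETR " + name, out.write)
        print("downloaded", name)
    ftp.quit()

# GFS: 0.5-degree forecast GRIB2 files
fetch_latest("/pub/data/nccf/com/gfs/prod", "gfs.*", "gfs.t??z.pgrb2f*")
# HWRF: 3 km inner-domain pressure-level output
fetch_latest("/pub/data/nccf/com/hur/prod", "hwrf.*", "*.hwrfprs_n*")
```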
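
    And here is a rough parser for the comma-separated track lines in the *.atcfunix files. The field positions are assumed from the standard ATCF layout documented on the JTWC page above; check that spec before relying on this, and note the example line uses made-up values.

```python
# Rough ATCF track-line parser.  Field order is assumed from the standard
# ATCF documentation linked above; verify against the official spec.
def parse_atcf_line(line):
    f = [x.strip() for x in line.split(",")]
    lat = int(f[6][:-1]) / 10.0 * (1 if f[6].endswith("N") else -1)
    lon = int(f[7][:-1]) / 10.0 * (1 if f[7].endswith("E") else -1)
    return {
        "basin":   f[0],        # e.g. "AL" for the Atlantic
        "cycle":   f[2],        # initialization time, YYYYMMDDHH
        "model":   f[4],        # technique name, e.g. "HWRF"
        "fhour":   int(f[5]),   # forecast lead time in hours
        "lat":     lat,         # degrees, north positive
        "lon":     lon,         # degrees, east positive
        "vmax_kt": int(f[8]),   # max sustained wind, knots
        "mslp_mb": int(f[9]),   # minimum sea-level pressure, mb
    }

# Example with made-up values in the standard field order:
print(parse_atcf_line("AL, 09, 2013072600, 03, HWRF, 006, 266N, 0800W, 045, 0995, TS"))
```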

  • Re:power (Score:4, Informative)

    by hwrfboy ( 2997951 ) on Friday July 26, 2013 @10:56AM (#44391051)

    The best way to predict the weather is to control it, obviously.

    Actually, that was attempted, and it was aborted for diplomatic reasons. NOAA tried cloud-seeding experiments in the 1960s-1970s attempting to weaken or destroy tropical cyclones while they were out at sea. Unfortunately, the experiment usually failed, and occasionally the surviving hurricanes made landfall and did significant damage. When that happened, some countries suspected that the US was secretly doing this to develop weather weapons, so the project was shut down in the early 1980s to avoid the resulting public outcry and diplomatic incidents. Why should Congress keep funding a failed experiment that causes diplomatic problems? You can read about this here:

    http://www.aoml.noaa.gov/hrd/hrd_sub/stormfury_era.html [noaa.gov]

    and Wikipedia has a good page with a lot more information:

    http://en.wikipedia.org/wiki/Project_Stormfury [wikipedia.org]

    On a positive note, the project contributed to the formation of the present-day AOML Hurricane Research Division, which now has the invaluable Hurricane Hunter aircraft, as well as some hurricane modeling experts. They contributed a lot in the past few years to calibrating the HWRF model physics and dynamics against observations.

  • Re:$20M/year? (Score:3, Informative)

    by hwrfboy ( 2997951 ) on Friday July 26, 2013 @02:27PM (#44393213)

    Actually, the high cost per year is because there are several stages of planned upgrades, intended to support the steady increase in resolution and data assimilation capacity of the various models. (Including a massive GFS upgrade next year.) The project, from the NCEP side at least, was completed five weeks early and under budget. The estimated savings, from shutting down the old overpriced Power6/AIX CCS cluster early, is about $1 million, and the switch to Intel/Linux will save taxpayer dollars in the long term. I know that's small compared to the national debt, but it isn't the usual government waste that you hear about, and I'm proud to say we're doing our part (even if most of the government isn't).

    As for who is getting the money, have you ever heard the old adage "nobody ever got fired for buying IBM"? While this is somewhat of a Stockholm-syndrome situation, I'm told IBM did manage to underbid everyone else this time, and the cluster is mostly working, five weeks early. (Completely working would have been nice, but you get what you pay for.) We've used creativity to work around the problems and get everything working with the cluster they gave us.
