How Google Researchers Used Neural Networks To Make Weather Forecasts (arstechnica.com) 45
A research team at Google has developed a deep neural network that can make fast, detailed rainfall forecasts. Google says that its forecasts are more accurate than conventional weather forecasts, at least for time periods under six hours. Ars Technica reports: The researchers say their results are a dramatic improvement over previous techniques in two key ways. One is speed. Google says that leading weather forecasting models today take one to three hours to run, making them useless if you want a weather forecast an hour in the future. By contrast, Google says its system can produce results in less than 10 minutes -- including the time to collect data from sensors around the United States. A second advantage: higher spatial resolution. Google's system breaks the United States down into squares 1km on a side. Google notes that in conventional systems, by contrast, "computational demands limit the spatial resolution to about 5 kilometers."
Interestingly, Google's model is "physics-free": it isn't based on any a priori knowledge of atmospheric physics. The software doesn't try to simulate atmospheric variables like pressure, temperature, or humidity. Instead, it treats precipitation maps as images and tries to predict the next few images in the series based on previous snapshots. It does this using convolutional neural networks, the same technology that allows computers to correctly label images. Specifically, it uses a popular neural network architecture called a U-Net that was first developed for segmenting biomedical images. The U-Net has several layers that downsample an image from its initial 256-by-256 shape, producing a lower-resolution image where each "pixel" represents a larger region of the original image. Google doesn't explain the exact parameters, but a typical U-Net might convert a 256-by-256 grid to a 128-by-128 grid, then convert that to a 64-by-64 grid, and finally a 32-by-32 grid. While the number of pixels is declining, the number of "channels" -- variables that capture data about each pixel -- is growing.
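Google doesn't publish the exact layer configuration, but a minimal sketch of that downsampling half, written here in PyTorch with made-up channel counts, might look something like this; the shapes in the comments mirror the 256 -> 128 -> 64 -> 32 progression described above.

```python
# Minimal sketch of a U-Net downsampling ("encoder") path in PyTorch.
# The channel counts and the single-channel 256x256 input are illustrative
# assumptions, not Google's published parameters.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions; padding=1 keeps the spatial size unchanged.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(),
    )

pool = nn.MaxPool2d(2)  # halves height and width at each downsampling step
enc1, enc2, enc3, enc4 = (conv_block(1, 32), conv_block(32, 64),
                          conv_block(64, 128), conv_block(128, 256))

x = torch.randn(1, 1, 256, 256)   # one single-channel 256x256 precipitation map
f1 = enc1(x)                      # -> 1 x  32 x 256 x 256
f2 = enc2(pool(f1))               # -> 1 x  64 x 128 x 128
f3 = enc3(pool(f2))               # -> 1 x 128 x  64 x  64
f4 = enc4(pool(f3))               # -> 1 x 256 x  32 x  32: fewer pixels, more channels
print(f4.shape)
```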
The second half of the U-Net then upsamples this compact representation -- converting back to 64-by-64, 128-by-128, and finally 256-by-256 representations. At each step, the network copies over the data from the corresponding downsampling step. The practical effect is that the final layer of the network has both the original full-resolution image and summary data reflecting high-level features inferred by the neural network. To produce a weather forecast, the network takes an hour's worth of previous precipitation maps as inputs. Each map is a "channel" in the input image, just as a conventional image has red, blue, and green channels. The network then tries to output a series of precipitation maps reflecting the precipitation over the next hour. Like any neural network, this one is trained with past real-world examples. After repeating this process millions of times, the network gets pretty good at approximating future precipitation patterns for data it hasn't seen before.
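Putting the two halves together, a toy end-to-end version -- again a sketch under assumed sizes rather than the architecture from the paper -- could stack an hour of past radar frames as input channels and emit the next hour's frames as output channels, with the skip connections implemented as channel-wise concatenation:

```python
# Toy U-Net with skip connections, sketched in PyTorch. The frame counts,
# channel widths, and 256x256 grid are assumptions for illustration only,
# not the configuration Google describes.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self, in_frames=6, out_frames=6):
        super().__init__()
        def block(i, o):
            return nn.Sequential(
                nn.Conv2d(i, o, 3, padding=1), nn.ReLU(),
                nn.Conv2d(o, o, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)
        # Downsampling path: resolution shrinks, channel count grows.
        self.enc1, self.enc2, self.enc3 = block(in_frames, 32), block(32, 64), block(64, 128)
        # Upsampling path: resolution grows back; each step concatenates the
        # matching encoder output (the "copies over the data" skip connection).
        self.up2, self.dec2 = nn.ConvTranspose2d(128, 64, 2, stride=2), block(128, 64)
        self.up1, self.dec1 = nn.ConvTranspose2d(64, 32, 2, stride=2), block(64, 32)
        self.head = nn.Conv2d(32, out_frames, 1)  # per-pixel forecast for each future frame

    def forward(self, x):
        f1 = self.enc1(x)              # full-resolution features
        f2 = self.enc2(self.pool(f1))  # half resolution
        f3 = self.enc3(self.pool(f2))  # quarter resolution (compact representation)
        d2 = self.dec2(torch.cat([self.up2(f3), f2], dim=1))  # skip from enc2
        d1 = self.dec1(torch.cat([self.up1(d2), f1], dim=1))  # skip from enc1
        return self.head(d1)

net = TinyUNet()
past_hour = torch.randn(1, 6, 256, 256)  # six past precipitation maps as input channels
next_hour = net(past_hour)               # 1 x 6 x 256 x 256 predicted maps
```

Training such a sketch would then amount to minimizing a per-pixel loss between predicted and observed precipitation maps over many historical examples, which is what the "repeating this process millions of times" step above refers to.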
Blackuweather forecast (Score:2)
Thank you Ollie...
Re: (Score:2)
short range and depends on inputs (Score:2)
this is basically deciding whether a pixel is a certain colour or not
it depends on drawing a colour map, so it's going to be short range; this cannot predict 3 months ahead... based on other factors such as weather events around the globe...
I wish they had actually come up with a long-range model; we could do with a new long-range model for the world...
Re: (Score:2)
this is basically deciding whether a pixel is a certain colour or not
Shit, Google made a video game!
Re: (Score:2)
That's what I thought, like that life simulator with gliders and stuff.
I also wonder if it isn't some momentum change kind of thing in basic math.
Or simply momentum period. (Score:1)
It sounds like it's a topological momentum, probably sprinkled with something akin to an asymptotic "Divergent Series" black-magic voodoo, which will give fairly accurate predictions in the VERY NEAR future, but which would blow up into nonsense if pushed into merely the near future.
tl;dr == After a few iterations WITHOUT the radar colored maps to rely upon, the algorithm is likely worthless.
[Otherwise chaotic turbulence wouldn
Re: (Score:1)
3 months ahead? Our current best weather forecasts can not predict a day in advance. How much is it going to snow tomorrow?
Re: (Score:3)
Sure they can predict the weather a day in advance. Those predictions are just limited in *precision*, both in time and space.
Consider Chicago, a vast city with no geographic barriers to weather systems (e.g. surrounding mountains). You can say with pretty high certainty whether it will, or will not, rain *somewhere* in the 234 square miles covered by the city some time tomorrow, but you can't always say with much certainty that it will be raining at the northern end of Millennium Park at 1:03 in the afternoon.
Re: (Score:2)
Why all the Google stories today? (Score:2)
Is there some event going on? I thought I/O wasn't until spring/summer, but Google's PR arm seems to be in overdrive right now.
Re: (Score:1)
Re: (Score:3)
Follow the money. Did an officer announce plans to sell 1% or more of his stock?
Re: (Score:2)
Re: Science. (Score:1)
Just because perfect prediction in a chaotic system isn't possible doesn't mean science doesn't have a solid grasp of the principles in operation.
Local? Few hours? Meh. (Score:4, Insightful)
Anyone who has lived in an area for a while can predict if it is going to rain within the next few hours with almost 100% certainty. Another Google "achievement" on par with their "quantum computer algorithm", which was laughed at by experts in the field.
As one would expect, being good in one field (e.g. spying on what people do on the Internet) does not mean you know much about anything else. About time those spyware enablers got the message too.
Re: (Score:2)
The concept of "prediction" that is based on past images sounds like a great, novel idea, too, although some dumber people might suspect it is the reason why every stock brokerage report has this line about "past perfomance does not guarantee future returns".
Re: (Score:2)
Re: (Score:2)
Reminds me of my old Uncle Herman. When I was a boy I was mowing the lawn one day and Uncle Herman came along, looked up at the one cloud in the sky and told me I had about 30 minutes to finish. And he was right. Uncle Herman had no idea of physics, atmospheric science or any of that. He was a farmer and got his forecasting skills the same way this machine has, by observation and experience. Only difference is that it took Uncle Herman decades in real time to figure it out and this machine took somewhat less time.
Re: (Score:3)
He was a farmer and got his forecasting skills the same way this machine has, by observation and experience.
The machine doesn't "observe and experience", it is estimating coefficients in a predefined model or two. If there is something new, or something with a low frequency, it will just ignore it. Your uncle probably won't.
Re: (Score:2)
He was a farmer and got his forecasting skills the same way this machine has, by observation and experience.
The machine doesn't "observe and experience", it is estimating coefficients in a predefined model or two.
Which is what Uncle Herman's synaptic network was doing, too. Except the coefficients were represented differently.
Humans are capable of higher-order thinking that machines are not (yet). Uncle Herman was capable of abstracting models out of his experience, deliberately identifying key features, categorizing them and formulating rules about local weather, while this machine system is not. But that's almost certainly not what he did. What he did was inferential pattern-matching. I'm sure he couldn't really have explained how he knew.
Re: (Score:2)
Re: Local? Few hours? Meh. (Score:2)
Yes, they can. I've been using an app for a few years that extrapolates the national weather service data like this and it's crazy accurate at predicting precipitation start/stop times (and revising those as the data progresses). By the time it's 2 hours out, the app is usually accurate to within 5 minutes.
Re: (Score:2)
Re: (Score:2)
Yes. But they do it without teaching the model anything about what a Doppler radar is, and without writing equations describing the movement of weather fronts depending on temperature and pressure. That's why they can run it in 10 minutes instead of hours.
Re: (Score:2)
The weather feed includes a prediction to two hours out now in radar.
You can tell when they shift algorithms, because the forecast starts deviating differently at the tail end.
Re: (Score:2)
crazy accurate at predicting precipitation start/stop times, (and revising those as the data progresses).
This is what any more or less good weather service has been doing for years without buzzwords like "AI" and "neural networks". Scraping this data and reposting it on their website is quite on par with their "quantum computing", like I already said.
Re: Local? Few hours? Meh. (Score:2)
Fantastic. Those years of building up my awareness of local microclimate patterns will really come in handy for all my traveling.
Neural ??? (Score:2)
It seems to me that they are just using fancy words to try to impress people who have little understanding.
Neural networks are a set of algorithms, modeled loosely after the human brain, that are designed to recognize patterns. [pathmind.com]
"Loosely" seems to be the key word.
To me, "a sophisticated computer program designed to recognize patterns" would be better and more honest.
Re: (Score:1)
Re: (Score:3)
Try a better wiki [wikipedia.org]. They're a matrix of nodes with weighted connections to neighbouring nodes - very much like neurons and synapses, at least to a first approximation. And like real neurons, their weighting can be adjusted by their input to better produce the desired output, so they can "learn" too.
Neural networks are a specific type of "computer program to recognise patterns", as there are lots of different kinds [wikipedia.org] of those.
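To make the "weighted connections that can learn" idea concrete, here is a toy single-neuron example in plain NumPy; it is unrelated to Google's forecasting model and only illustrates weights being nudged, step by step, toward a desired output.

```python
# One artificial neuron learning the AND function by gradient descent.
# Purely illustrative: real networks stack huge numbers of such units.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([0, 0, 0, 1], dtype=float)                      # desired outputs (logical AND)

rng = np.random.default_rng(0)
w, b = rng.normal(size=2), 0.0          # the "weighted connections" and a bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    out = sigmoid(X @ w + b)            # the neuron's current predictions
    grad = out - y                      # error signal
    w -= 0.5 * (X.T @ grad) / len(X)    # nudge each weight to reduce the error
    b -= 0.5 * grad.mean()

print(np.round(sigmoid(X @ w + b), 2))  # approaches [0, 0, 0, 1]
```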
"... vaguely inspired..." (Score:2)
"Artificial neural networks (ANN) or connectionist systems are computing systems vaguely inspired by the biological neural networks that constitute animal brains."
"... vaguely inspired..."
It's vague, not good communication.
Re: (Score:1)
It's just a description of what it is - a network of artificial neurons, elementary units that work somewhat like biological neurons. The neurons are very simple in terms of code, and complex behavior emerges from connecting a large number of them together. ANNs are not programmed in the traditional sense, to solve discrete tasks in a specific way, so calling it a "sophisticated computer program designed to do something" is not really correct at all. The way it creates the forecast in this instance is a learned behavior, not something explicitly programmed.
My AI always says: (Score:2)
The weather will get better or worse or it will stay as it is.
Our system can predict rain within a 6 hour window (Score:3, Funny)
Re: (Score:2)
The run time is less than 10 minutes, according to the summary.