'Breakthrough' LI-RAM Material Can Store Data With Light (ctvnews.ca) 104
A Vancouver researcher has patented a new material that uses light instead of electricity to store data. An anonymous reader writes: LI-RAM -- that's light-induced magnetoresistive random-access memory -- promises supercomputer speeds for your cellphones and laptops, according to Natia Frank, the materials scientist at the University of Victoria who developed the new material as part of an international effort to reduce the heat and power consumption of modern processors. She envisions a world of LI-RAM mobile devices that are faster, thinner, and able to hold much more data -- all while consuming less power and producing less heat.
And best of all, they'd last twice as long on a single charge (while producing almost no heat), according to a report on CTV News, which describes this as "a breakthrough material" that will not only make smartphones faster and more durable, but also more energy-efficient. The University of Victoria calculates that 10% of the world's electricity is consumed by "information communications technology," so LI-RAM phones could conceivably cut that figure in half.
They also report that the researcher is "working with international electronics manufacturers to optimize and commercialize the technology, and says it could be available on the market in the next 10 years."
So, the opposite of a dark sucker? (Score:1)
Because we all know there's no such thing as "light".
Re:10 years (Score:4, Informative)
Apparently it wasn't obligatory enough for you to make a proper link [xkcd.com]
Re: (Score:3)
Bah, it will be late then. In 10 years, with nuclear fusion and wireless energy transmission, we won't need to be stingy anymore, will we?
10 years? We'll have hit the singularity by then and transcended.
Re: 10 years (Score:1)
More like hit the singularity and be extinct lol
Re: (Score:2)
Why? Consider there are numerous forms of life on this planet... it is evident that the more intelligent species can coexist with the lesser.
There is no reason to presume that the singularity would represent humanity's extinction.
Re: (Score:2)
Re: (Score:2)
Nah (Score:5, Interesting)
Maybe it's because I'm turning 50 this year, but I simply don't believe it.
At a certain point I suspect "fantastic claim" fatigue has to set in, where you've heard so many promising concepts but watched the huge majority founder on realities of cost, industrial scaling, or unforeseen complications.
The fact that they say it might make it to the market in ten years means it's barely more than a tenuous idea right now, and frankly probably not even worth reporting on. The hyperbolic claims by the inventor make it even less credible, while the nonsensical reporting (implying that such devices would actually run only in light) is idiotic.
Re:Nah (Score:5, Insightful)
The stupid summary leaps to the absurd conclusion that mobiles account for 100% of the power consumed by ALL IT equipment. I'm pretty sure it's actually more like 1%; perhaps less.
Re:Nah (Score:5, Insightful)
The fact that they mention "smartphones and laptops" as if they're the only computers in existence is a hint that they don't realize that these two groups pale in comparison to desktop computers and servers in terms of power consumption.
Re:Nah (Score:5, Insightful)
Yeah, you tell em! AI is never going to amount to anything, and batteries haven't changed in decades!
Now if you'll excuse me, I'm going to pull out my phone with a 3200 mAh battery the size of a couple-millimeter-thick business card and tell Google Photos to search through my thousands of unlabeled photos to show me just those that contain pictures of an arbitrary object.
Re: (Score:3)
...to show me just those that contain pictures of an arbitrary object.
You don't need very advanced AI to show pictures of arbitrary objects, though.
Re: (Score:2, Informative)
Re: Nah (Score:5, Insightful)
I'm sorry, but the person gave two examples, both of which *have* undergone major advances and made their way into our everyday lives. And if neural nets (what drives the image recognition) aren't from AI research then what are they from?
Re: (Score:2, Informative)
Re: (Score:2)
A "failed concept" that has resulted in numerous breakthroughs [wired.com], such as beating a Go grandmaster with a fraction of the expected computing power.
And I imagine all further AI research will continue to be dismissed by you as "just algorithms" up to and including the day it finally produces an artificial True Scotsman.
Re: (Score:2)
Neural nets perform well when the input data is large enough to be statistically valid. I suspect the reason we've seen recent advances in the otherwise well-understood area of neural nets is that cheap commodity hardware has made it trivial to build fast cluster computers with *large nerdy number* of RAM.
I wouldn't be surprised to learn that building massive networks of inter-connected neural nets is the next stage.
Re: Nah (Score:5, Insightful)
Right, dead end, which is why they keep becoming better and better at various tasks, to the point that they're now entering our everyday lives? Because surely that's the very definition of a dead-end.
Does anyone remember how terrible voice recognition used to be 1-2 decades ago? Because I sure do. The concept that you could have things like Siri, Google Now, devices like Alexa, etc get it right the vast majority of the time would have been laughable. Neural nets used to be too bad to use in these tasks at all. When Google made their first neural-net based voice recognition system (rather than the algorithmic matching ones from before) it got a 25% error rate. Now it's down to 8%. They're cropping up everywhere - most recently Skype's real-time translation service. Which is starting out a bit imperfect, and I guarantee you, once the neural nets get better with time, people will promptly forget how they didn't used to be as good as they'll have become, just like happens with everything else neural nets do.
On the image side, it's not just about image recognition (say, Google Photos). Facial recognition has gone from fringe to dangerously accurate. In my last job (medical imaging) we used neural nets to segment the brain. Which I find to be a rather amusing concept, artificial neural networks studying biological neural networks ;) They started off rather poor at the job, but by the time I left they were doing a better job at it than humans. Neural nets are also better lipreaders than humans. Really, the number of fields they've been expanding into, and the progress they've made over the past decade, is staggering. One "hard AI" task after the next, they're getting better than humans. Remember this XKCD comic [xkcd.com] from just a few years ago? You can now download software to do just that sort of thing. It's not just about computing power advances, either; the learning algorithms themselves have been advancing by leaps and bounds recently.
Now, if you don't mean neural networks when you say AI, then what the heck do you mean when you say AI?
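Since the thread keeps circling around what "neural nets" actually are, here is a minimal sketch: a two-layer network trained by backpropagation to learn XOR, the classic toy problem that the single-layer perceptrons of the 60s could not solve. Every number here (hidden-layer size, learning rate, iteration count) is an arbitrary illustrative choice, not a description of any production system.

```python
import numpy as np

# Tiny 2-layer neural network learning XOR via backpropagation.
# All hyperparameters below are arbitrary choices for illustration.

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(size=(2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)

lr = 1.0
for _ in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: gradient of squared error through the sigmoids
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient-descent updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # typically approaches [0, 1, 1, 0]
```

The point of the toy is the one the comment makes: nothing in the training loop changed conceptually since the 80s, but cheap hardware lets you scale the same idea up by many orders of magnitude.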
Re: (Score:2)
Now, if you don't mean neural networks when you say AI, then what the heck do you mean when you say AI?
I don't know about the person you are replying to, but I get really annoyed that when scientists say "AI" they usually mean weak AI, and when newspapers hear it they usually think strong AI [wikipedia.org]. We're not anywhere close to solving the strong AI problem, but some of the algorithms we've come up with along the way are really, really cool.
Re: (Score:2)
when newspapers hear it they usually think strong AI
You shouldn't read newspapers.
Re: Nah (Score:2)
98% was reached in the eighties; 98.2% around 2000.
Re: (Score:2)
An awful lot of profitable technology employs that "dead end".
Just because there's no evidence that we're actually any closer to true artificial consciousness, doesn't mean AI research hasn't developed some truly astounding applications of limited-domain "intelligence". And we're still only scratching the surface of "neural" networks specifically - they failed to do much of interest in the 60s in large part because they are *very* badly suited to emulation in software and require enormous processing power.
Re:Nah (Score:4, Funny)
Image recognition is not AI either.
Correct, AI is whatever is not yet possible.
Re: (Score:1)
Re: (Score:3)
Nice goal-post moving. Define "intelligence". Keep in mind it may have nothing whatsoever to do with conscious thought.
Perhaps the biggest thing we've learned from AI research is how many "intelligent" skills and behaviors can actually be performed by machines without any apparent shred of human-like intelligence - like, say, being able to completely trounce Grandmaster Go players at one of the most subtle and complicated games ever created, one that is essentially impervious to the sort of brute-force analysis that works for chess.
Re: (Score:2)
No, actually not. That's how they make chess software, and it's effective for such a simple game, but Go has a fan-out on the order of 400 possible moves per turn; decision-tree techniques never delivered anything much beyond mediocre amateur play.
Re: (Score:2)
So they spent $100 million to beat people in Go. Color me impressed.
IBM spent $100 million to beat Kasparov in Chess. Now you can beat Kasparov with your smartphone. Same will happen with Go, as both hardware and software get improved.
Re: (Score:2)
Which is why computer programs have already reached the human level.
Re: (Score:2)
Re: (Score:3)
Basically we went down th
Re:Nah (Score:5, Insightful)
It's the technology news/pr machine. I think we've had "breakthroughs" like this for ages, but what we didn't have for most of the time was a relentless, hype-oriented technology "press" that made us aware of them, and also spun them up into the next big thing. They were what they were, quiet little advancements that might or might not ever see the light of the day.
Re: (Score:2)
Maybe it's because I'm turning 50 this year, but I simply don't believe it.
At a certain point I suspect "fantastic claim" fatigue has to set in, where you've heard so many promising concepts but watched the huge majority founder on realities of cost, industrial scaling, or unforeseen complications.
The fact that they say it might make it to the market in ten years means it's barely more than a tenuous idea right now, and frankly probably not even worth reporting on. The hyperbolic claims by the inventor make it even less credible, while the nonsensical reporting (implying that such devices would actually run only in light) is idiotic.
Came here to post exactly this, I don't even have anything to add. Well done!
Re:Nah (Score:4, Insightful)
Yeah.... 5 years out for a tech claim means "we have a bit of experimental data that shows something might work. (Please fund me)".
10 years out in the tech world means "this is wild speculation and might never even become a technical demonstration. (Please fund my startup)"
Re:Nah (Score:5, Insightful)
Yeah.... 5 years out for a tech claim means "we have a bit of experimental data that shows something might work. (Please fund me)".
10 years out in the tech world means "this is wild speculation and might never even become a technical demonstration. (Please fund my startup)"
So, what's your solution? For all of the whining and moaning, and hand wringing, it seems that the answer for so many slashdotters is "Jeezuz NO! not another change! Not a breakthrough! Stop reporting on stuff!"
I can always see who works in science/tech - and sometimes who shouldn't be - by these posts.
Technology does not spring forth fully formed and beautiful, like Venus from the sea. Stuff takes time. In my field, it typically takes 20 years to develop a concept into a finished product. A few I've worked on were 50+ years from someone's concept to end development. Just depends on how far ahead of the curve the ideas go.
It isn't to say that there isn't bullshit. I recall a super radio antenna design from Rutgers that was claimed to be so efficient that the 100 watts pumped into it caused the antenna to melt. Parse that for a bit, and see if you can't come to a completely different conclusion.
Which is all to say, if we have a good reason to believe that a story is bogus, like in the above instance, an antenna melting is the very antithesis of efficient - by all means point it out.
But back to the idea that if there is a time period before deployment of 5 or 10 years, that it is bogus, well, you are applying a metric to technology that is just plain wrong.
A good hint is battery tech. They aren't inventing new elements, and we know what combinations will produce what; we've known for a long time. https://en.wikipedia.org/wiki/... [wikipedia.org] Breakthroughs aren't often breakthroughs. The concepts and lab results very often need years of technological advancements in manufacturing processes to catch up. In the meantime, getting pissed or completely pessimistic about it is kind of a get-off-my-lawn reaction.
Re: (Score:2, Insightful)
So, what's your solution? For all of the whining and moaning, and hand wringing, it seems that the answer for so many slashdotters is "Jeezuz NO! not another change! Not a breakthrough! Stop reporting on stuff!"
My proposal would be to stop reporting on stuff that is 100% fluff, and 0% technical details.
Re: (Score:2)
So, what's your solution? For all of the whining and moaning, and hand wringing, it seems that the answer for so many slashdotters is "Jeezuz NO! not another change! Not a breakthrough! Stop reporting on stuff!"
My proposal would be to stop reporting on stuff that is 100% fluff, and 0% technical details.
This link might give you little more details - http://www.thehindu.com/sci-te... [thehindu.com]
Re: (Score:1)
I agree, Slashdot used to be the place to report these things and then have an intelligent discussion about the ideas. Once in a while we get one of those, but most of the time it's just a "you're crazy, get out of here" reaction.
Maybe all of this is not true, but I'm sure some of it is, and it's an area we should put more research money into.
Re: (Score:2)
I agree, Slashdot used to be the place to report these things and then have an intelligent discussion about the ideas. Once in a while we get one of those, but most of the time it's just a "you're crazy, get out of here" reaction.
Maybe all of this is not true, but I'm sure some of it is, and it's an area we should put more research money into.
I do have questions. The presumed power use has me skeptical. But remembering the old-school UV-PROMs, there's no doubt that light can be harnessed to mess with memory. But follow-up and discussion is awesome.
The fact that so many slashdotters do not even want to read it, and want it actively suppressed, shows how reactionary humans through the ages have had severe problems when encountering truths that do not lock step with their worldview.
Re: (Score:2)
Amen
Re: (Score:3)
Re: (Score:2)
It may be a bit hard for you to notice since so much tech has a 20 year lead time or more from journal article to product but a lot of the stuff we take for granted now was once those "fantastic claims". If you'd shown me white LEDs and the lithium batteries of today in 1980 they would have looked like "fantastic claims" to me.
Re: (Score:3)
"The fact that they say it might make it to the market in ten years means it's barely more than a tenuous idea right now ..."
Yeah, and those dang 'horseless carriages' are another waste of time. What a stupid idea! Noisy stinky unreliable contraption that can't go faster than a mule.
Re: Nah (Score:2)
The old saying is wrong. In reality, extraordinary claims require hyperbolic press releases.
Re: (Score:2)
Re: (Score:1)
I have taken to calling this "Popular Science Fatigue". It's where you read about some wonderful breakthrough technology in Popular Science or some other mass-market source and then it disappears never to be seen again.
Moot. (Score:2)
Indeed. All the "light" hogwash aside, all the proposed benefits are also moot. RAM, and memory in general, does not consume a great amount of power. Compared to other components, it probably already consumes the least. So even if the improvement were astronomical, in real terms it is moot.
Depending on your device, your consumers of power are going to be your GPU or your CPU, or on something like your phone, your display, by many orders of magnitude over whatever the memory might be using. Whi
Um.... huh? (Score:5, Insightful)
Um, 10% of the world's electricity is not consumed by phones. And even if they actually meant all computing and networking equipment combined, how is a RAM advancement supposed to cut all power consumed by computers, switches, etc in half?
Facepalm.
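A rough sanity check bears this out. Using order-of-magnitude assumed figures (phone count, per-phone charging energy, world electricity use — all ballpark assumptions, not measurements), phones alone come nowhere near 10% of world electricity:

```python
# Back-of-envelope check of the "10% of world electricity" framing as it
# applies to phones. All figures below are rough, order-of-magnitude
# assumptions for illustration, not measurements.

PHONES = 2e9                 # assumed number of smartphones in use
WH_PER_PHONE_PER_DAY = 7     # assumed charging energy per phone per day (Wh)
WORLD_TWH_PER_YEAR = 20_000  # assumed global electricity use (TWh/year)

# annual phone charging energy, converted from Wh to TWh
phone_twh = PHONES * WH_PER_PHONE_PER_DAY * 365 / 1e12
fraction = phone_twh / WORLD_TWH_PER_YEAR

print(f"Phones: ~{phone_twh:.1f} TWh/yr, ~{fraction:.4%} of world electricity")
```

Even if every assumption above is off by a factor of a few, phone charging lands around a hundredth of a percent of world electricity, so no phone-side RAM improvement can "cut 10% in half."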
Overstated, but routers use more RAM than CPU (Score:2)
> how is a RAM advancement supposed to cut all power consumed by computers, switches, etc in half?
Cutting it in HALF is probably overstating it, but all of those devices use RAM, and would benefit from more efficient RAM.
Generally, routers (real routers, as opposed to consumer wireless access points) process 99.9% of the packets with RAM tricks, using almost no cpu. The cpu is mostly there to process commands to the router, such as configuration changes, while packets flowing *through* the router are
Standard RAM cells, more address lines (Score:2)
At the level this research is, storing a bit, CAM is just like bog-standard DRAM. What sets CAM apart is basically more addressing lines. That's external to the memory cells themselves. The other difference of CAM is how the output is interpreted, since you're reading all addresses at once. Again, that's quite external to the individual bit memories.
Routers use ternary CAM to route each packet (Score:2)
Switches use binary CAM to map MAC addresses to interfaces, and for other purposes. Very similarly, routers use ternary CAM to map ip addresses to interfaces, and for other purposes.
When processing packets/frames through the device, switches and routers do essentially the same thing - select the outgoing interface by looking up the destination address in a CAM table. The difference between switches and routers is how those CAM tables are built.
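The CAM lookup described above can be sketched in software as a longest-prefix match. The routing entries and interface names below are invented for illustration; a real TCAM compares the destination against every prefix in parallel in hardware, whereas this sketch loops sequentially:

```python
import ipaddress

# Software sketch of the ternary-CAM lookup a router performs per packet:
# longest-prefix match of a destination IP against a routing table.
# Table entries and interface names are made up for illustration.

routes = [
    (ipaddress.ip_network("10.0.0.0/8"), "eth0"),
    (ipaddress.ip_network("10.1.0.0/16"), "eth1"),
    (ipaddress.ip_network("0.0.0.0/0"), "eth2"),   # default route
]

def lookup(dst: str) -> str:
    dst_addr = ipaddress.ip_address(dst)
    # among all matching prefixes, pick the most specific (longest) one
    matches = [(net.prefixlen, iface) for net, iface in routes if dst_addr in net]
    return max(matches)[1]

print(lookup("10.1.2.3"))   # most specific match is the /16 -> eth1
print(lookup("10.2.0.1"))   # only the /8 matches -> eth0
print(lookup("8.8.8.8"))    # falls through to the default route -> eth2
```

The "ternary" part of a TCAM is exactly the don't-care bits that make the prefix lengths above possible: each entry stores 0, 1, or "match anything" per bit position.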
Way overhyped by the media (Score:5, Insightful)
So I did a very quick search on the internet looking for Light induced RAM and Light induced magnetoresistance and only found one article that predates the slashdot article and the one it links to. (Ok, I'm procrastinating from doing other stuff).
http://www.uvic.ca/home/about/... [www.uvic.ca]
This university-published article is just as short on details and has no links to any published research. It's also a bit laughable: "new material allows computer chips to exist at a molecular level" -- which means what, exactly? Computer chips currently don't exist at the molecular level? Anyway, I don't mean to give their communications department a hard time, I just want more solid info.
It's clear that some of the claims from the hyped article that slashdot cites are ridiculous (at least the university release doesn't make those claims). The journalists, lacking any background in science, probably called up some "experts" and asked, out of context, "if you had a material that could do such and such, what would the advantages be?" So these experts, whether or not they actually knew anything, just started making things up: that it'll cut down on energy consumption (true, but not by a huge amount), and that it would prevent fires like the Samsung smartphone's (probably not, because the modest power savings from this RAM would not allow the battery, the actual cause of the fires, to be designed differently).
Unfortunately, the heat (and power) problems are not in the RAM but in the processor (amongst other things), which this technology does not address. The university article says it is part of an effort to reduce the power and heat of processors, but it does not say this technology does that. Apparently, from the article, it is only suitable for RAM; hence the name LI-RAM. So while it may be faster (good) and not give off much heat (also good), it doesn't live up to the hype in the distorted media interpretations of the university article (which the slashdot submitter then chopped up and republished). This all assumes that they can get this to work at the fantastic performance and density levels of modern RAM, all while not introducing new sources of heat and power to make it work (it requires "green light", presumably from a laser).
Anyway, if you want to waste some time, take a look at the Slashdot link and then look at the university article and you'll see how information can be mangled and hyped up by people who don't have a background in the subject. Of course, since we all like "free" (or ad supported) news, we aren't exactly encouraging accurate journalism :(
Indeed, this is not yet more than an idea... (Score:2)
"The objective of this research is to explore new classes of compounds that exhibit multifunctional magnetic properties of fundamental importance to high-density storage methods and molecular electronics. The scope of the proposed projects cover a broad range of fundamental topics in chemistry, including organic and inorganic synthesis, structure
Re: (Score:3)
Because I am way too busy filing a patent for "a method of blending unicorn poo with fairy dust, with the potential to operate on an industrial scale at a lower cost than some existing methods".
What is your excuse? <Slashpoll required here >
Re: (Score:2)
Theoretically, you could, and an optical CPU would almost certainly be faster and cooler than an electronic CPU. Unlike what the article says, though, it would definitely not be smaller. There's a lower limit to how small you can make something, and still have light pass through it, and for integrated circuits, we've exceeded that limit by around an order of magnitude.
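The size limit mentioned above comes from diffraction: you cannot confine light to features much smaller than roughly half its wavelength. A back-of-envelope comparison, using approximate figures (green-light wavelength, a modern process node) that are assumptions for illustration only:

```python
# Rough numbers behind the "optical parts can't be as small" point.
# All figures are approximate assumptions for illustration.

green_light_nm = 530                        # wavelength of green light, ~530 nm
diffraction_limit_nm = green_light_nm / 2   # rough minimum optical feature size
transistor_feature_nm = 10                  # ballpark modern process feature size

ratio = diffraction_limit_nm / transistor_feature_nm
print(f"Optical features would be ~{ratio:.0f}x larger than today's transistors")
```

That factor of a few tens is the "order of magnitude" the comment refers to: an all-optical chip could be fast and cool, but not as dense as current electronics.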
The UVic prof and article (Score:2)
Obligatory xkcd (Score:1)
Re: (Score:2)
Actually, strictly speaking, today's cellphone is not the supercomputer class of 12 years ago, in fact not really even of 20 years ago, but that's fine, since optimizing for the measures of a supercomputer would have no relevance to anything a person does on a mobile device, and rarely even has relevance for a desktop. If we grossly oversimplify, a 20-year-old supercomputer-class system is about 10 times as powerful as a modern flagship phone.
But no amount of super fast memory technology will overcome that, and tru
Tech analyst? (Score:2)
FTFA:
Wow. Just how easy is it to get a job as a tech analyst at CTV? This seems to be attributed to CTV tech analyst Carmi Levy. Was it the analyst or the author who screwed this up?
Re: (Score:2)
They are going to replace batteries with RAM, that's how amazingly breakthrough this RAM is.
Re: (Score:2)
They are going to replace batteries with RAM, that's how amazingly breakthrough this RAM is.
Perhaps there will be solar cells within the RAM that provide electricity for the phone. To recharge your phone you just shine a flashlight into the USB port and it traps all the light inside...
And the greatest part of all- you can charge that flashlight from your phone.
Increased efficiency does not reduce consumption (Score:2)
It increases usage. Jevons Paradox [wikipedia.org]
As a simple example, do you think bitcoin miners are going to pocket the savings, or expand operations to leverage the savings?
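The rebound argument can be put in toy numbers (all invented for illustration): a miner with a fixed electricity budget whose hardware efficiency doubles simply doubles output at the same consumption, rather than halving consumption:

```python
# Toy illustration of the Jevons-paradox argument: if mining efficiency
# doubles, a profit-maximizing miner expands hashing rather than pocketing
# the savings. All numbers are invented for illustration.

power_budget_kwh = 1000      # fixed electricity spend the miner can afford
hashes_per_kwh_old = 50
hashes_per_kwh_new = 100     # efficiency doubles

old_hashes = power_budget_kwh * hashes_per_kwh_old
new_hashes = power_budget_kwh * hashes_per_kwh_new  # same power, double output

print(new_hashes / old_hashes)  # output doubles while consumption is unchanged
```

In other words, efficiency gains in a competitive, profit-driven activity get absorbed by expansion; total consumption does not fall just because each unit of work got cheaper.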
Re: (Score:2)
I think the guy/organization in China who effectively owns the Bitcoin network will upgrade just enough to ensure they remain in control, while not owning the network so obviously and entirely that it reduces the cultists' confidence in it to the point where he can't keep extracting wealth from the US.
It's not about maximizing Bitcoin mining; it's about maximizing net profits and keeping the scam going for as long as possible.
I don't get it (Score:2)
"devices which are faster, thinner..."
I guess they will continue to make them thinner until they cut.
And we'll just use bigger and bigger cases.
What's the difference between this and... (Score:1)
NOT A GOD DAMN PATENT (Score:1)
Re: (Score:2)
Supercomputer speeds (Score:2)
Uhh, my phone is easily as powerful as a Cray, so.... already there!
"..in the next 10 years" (Score:2)
Light-induced chemical and biological memory (Score:1)
Light-induced chemical memory: photographic films/papers, typically subject to fading but it can be "fixed" to last decades or longer.
Light-induced biological read-only memory, very short-term/fades fast if not refreshed: photoreceptors in the eyes
Light-induced biological read-only memory, fades after a few days or weeks if not refreshed: sunburn/tan-lines