AI Technology

Eye-Catching Advances in Some AI Fields Are Not Real (sciencemag.org)

silverjacket shares a story from this week's issue of Science: Artificial intelligence (AI) just seems to get smarter and smarter. Each iPhone learns your face, voice, and habits better than the last, and the threats AI poses to privacy and jobs continue to grow. The surge reflects faster chips, more data, and better algorithms. But some of the improvement comes from tweaks rather than the core innovations their inventors claim -- and some of the gains may not exist at all, says Davis Blalock, a computer science graduate student at the Massachusetts Institute of Technology (MIT). Blalock and his colleagues compared dozens of approaches to improving neural networks -- software architectures that loosely mimic the brain. "Fifty papers in," he says, "it became clear that it wasn't obvious what the state of the art even was." The researchers evaluated 81 pruning algorithms, programs that make neural networks more efficient by trimming unneeded connections. All claimed superiority in slightly different ways. But they were rarely compared properly -- and when the researchers tried to evaluate them side by side, there was no clear evidence of performance improvements over a 10-year period. The result [PDF], presented in March at the Machine Learning and Systems conference, surprised Blalock's Ph.D. adviser, MIT computer scientist John Guttag, who says the uneven comparisons themselves may explain the stagnation. "It's the old saw, right?" Guttag said. "If you can't measure something, it's hard to make it better."
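
For readers who haven't met pruning before: the simplest family of these algorithms just zeroes out the smallest-magnitude weights and keeps the strongest connections. The snippet below is a minimal NumPy sketch of that idea, not any of the 81 algorithms from the paper; the toy matrix and the 75% sparsity level are made up for illustration.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest absolute value."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)                    # number of weights to remove
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]     # k-th smallest magnitude
    mask = np.abs(weights) > threshold               # keep only the stronger connections
    return weights * mask

# Toy example: a random 4x4 "layer" pruned to 75% sparsity.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
pruned = magnitude_prune(w, sparsity=0.75)
print(f"nonzero weights before: {np.count_nonzero(w)}, after: {np.count_nonzero(pruned)}")
```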

Researchers are waking up to the signs of shaky progress across many subfields of AI. A 2019 meta-analysis of information retrieval algorithms used in search engines concluded the "high-water mark ... was actually set in 2009." Another study in 2019 reproduced seven neural network recommendation systems, of the kind used by media streaming services. It found that six failed to outperform much simpler, nonneural algorithms developed years before, when the earlier techniques were fine-tuned, revealing "phantom progress" in the field. In another paper posted on arXiv in March, Kevin Musgrave, a computer scientist at Cornell University, took a look at loss functions, the part of an algorithm that mathematically specifies its objective. Musgrave compared a dozen of them on equal footing, in a task involving image retrieval, and found that, contrary to their developers' claims, accuracy had not improved since 2006. "There's always been these waves of hype," Musgrave says.
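
As a quick gloss on "loss functions" in this metric-learning setting: they score how well an embedding pulls same-class images together and pushes different-class images apart. Below is a minimal NumPy sketch of one classic example, the contrastive loss, purely as an illustration; it is not the evaluation protocol from Musgrave's paper, and the margin and toy embeddings are arbitrary.

```python
import numpy as np

def contrastive_loss(emb_a, emb_b, same_class, margin=1.0):
    """Classic pairwise contrastive loss for retrieval-style embeddings.

    Same-class pairs are pulled together (loss = distance^2); different-class
    pairs are pushed apart until they are at least `margin` away.
    """
    dist = np.linalg.norm(emb_a - emb_b, axis=1)
    pos = dist ** 2                                 # penalty for same-class pairs
    neg = np.maximum(0.0, margin - dist) ** 2       # penalty for different-class pairs
    return np.where(same_class, pos, neg).mean()

# Toy 2-D embeddings: one matching pair and one non-matching pair.
a = np.array([[0.0, 0.0], [0.0, 0.0]])
b = np.array([[0.1, 0.0], [0.3, 0.0]])
same = np.array([True, False])
print(contrastive_loss(a, b, same))
```
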
  • Right. Just like I thought.
    • by AleRunner ( 4556245 ) on Saturday May 30, 2020 @01:46AM (#60124140)

      Right. Just like I thought.

      We shouldn't get this wrong. The AI Hype cycle is so well known that there are standard terms like AI winter [wikipedia.org] for parts of it. But never before has the cause of the hype/fail been academic fraud, incompetence and failure to measure. The expert systems people believed they were delivering something that would eventually be useful and had results which supported them. Their results were reasonably open and reproducible. Having produced nothing at all whilst pretending to have great achievements is a new 21st-century embarrassment.

      • by Rick Schumann ( 4662797 ) on Saturday May 30, 2020 @02:06AM (#60124182) Journal
        Here's my take: companies like Alphabet and its subsidiaries invested countless millions trying to develop AI for things like driverless cars. They apparently thought it was going to be Just Another Development Cycle, after which they mass produce, or license the tech, or whatever they planned -- only to find they can't get it across the finish line, because we have no idea how 'intelligence', 'reasoning', or 'thinking' actually works. So just like the Bridge in classic Zork, you can get halfway across the distance left to the other side, but you never, ever get 100% across the Bridge. So what do they do, to prevent their investors and stockholders from turning into a lynch mob? They turn it over to their marketing department, who hype the living hell out of it, talk it up with the media, who don't know anything more than what they're being told to start with, so they hype it up, and before long the average person thinks they're going to have an I, Robot android in their house, and K.I.T.T. from Knight Rider in their garage. Meanwhile the reality couldn't be farther from the truth, and all these little 'AI' companies spring up, some thinking they're the smartest guys in the room, some just out to take VCs for everything they can get, and before you know it the whole world is worrying about losing all their jobs to robots, and our future AI overlords ruling the puny humans -- and like the self-driving cars, none of it ever gets across the finish line, because the so-called 'technology' is stagnant for lack of understanding of how our own brains even work on a full-system-wide basis.

        I've talked to people who are in neuroscience. They tell me we really have no idea how a brain does what it does, and we're not likely to know for quite some time to come. I've also talked to people in software engineering. They tell me that the difference between the so-called 'AI' they keep trotting out today, and the so-called 'AI' they had 20 years ago, isn't much at all on the machine code level, it's just bigger, faster hardware for it to run on.

        None of the above seems to make a dent in too many people's belief that all the hype is real, though.
        If you look back over the last 100 years, you see in literature and media fantasy visions of machines that think and act like humans, or perhaps alien minds, but still minds. We humans take for granted the innate ability to think and reason that we've evolved over millions of years, to the point where some people believe it can't be that hard to build a machine that does the same thing. They couldn't be more wrong, though.
        • by wbcr ( 6342592 ) on Saturday May 30, 2020 @04:56AM (#60124358)
          I concur there is a grain of truth in what you said. But there is also undeniable progress amid the hype, mostly in the fields of computer vision and NLP. 20 years ago we were not able to train a 100-layer network, and the progress is not just thanks to more powerful hardware but also to better network architectures and adaptive training algorithms/procedures (to manage exploding/vanishing gradients). There is definitely more knowledge now about designing loss functions, and all sorts of clever tricks for framing the problem we are trying to solve as a function-minimization problem (the new term being 'differentiable programming'). Maybe we have not taken a significant step towards AGI, but in the field of pattern matching there has been significant progress.
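
To make the "better architectures" point concrete: one of the tricks that lets gradients survive a ~100-layer stack is the residual (skip) connection. The sketch below is a generic illustration assuming PyTorch is installed, not any particular published network; the widths and depths are arbitrary.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """A plain fully connected block with a skip connection.

    The output is x + f(x), so during backpropagation the gradient has a
    direct path through the identity term and is not repeatedly shrunk by
    the block's own weights -- one ingredient that makes very deep stacks
    trainable at all.
    """
    def __init__(self, width):
        super().__init__()
        self.f = nn.Sequential(
            nn.Linear(width, width),
            nn.ReLU(),
            nn.Linear(width, width),
        )

    def forward(self, x):
        return x + self.f(x)          # skip connection

# A toy 100-block network on random data, just to show it runs end to end.
net = nn.Sequential(*[ResidualBlock(32) for _ in range(100)], nn.Linear(32, 1))
x = torch.randn(8, 32)
print(net(x).shape)   # torch.Size([8, 1])
```
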
          • You're not wrong, but as it currently stands an amoeba is still smarter overall than most so-called 'AI'. 'Deep learning algorithms', 'Neural networks', and so on, are only the tip of the AI iceberg, and where they fall short, they fall short significantly. We really need to understand how a living brain works as a system, not just a couple of the sub-systems.
            What I think we need to focus on to get us there is the technology to scan a living, working brain at the resolution and sampling rate necessary to a
        • by Anonymous Coward

          > because the so-called 'technology' is stagnant for lack of understanding of how our own brains even work on a full-system-wide basis.

          Oh, it's not completely stagnant. Computer vision doesn't work like human vision, it's pixel-based rather than edge-based, and that was documented by Jerry Lettvin with frog retinas in the 1960s. They just refuse to follow the analog, organic, adaptive structures used by real brains. They're quite insistent that by applying enough computing power, they can unmix the

          • But it's been modest improvements for the last 15 years or more, rather than quantum improvements.

            But...that's good, right? Quantum improvements would have been undesirable.

            a lot of ratholes of object oriented layers of abstraction which natural systems violate almost deliberately

            I wouldn't go that far. But they still haven't advanced beyond Sussman's "variety of inadequate ontological theories". It's those that natural systems regularly break.

          • It might not even be possible to produce a working brain analog using digital electronics. We may end up having to grow biological brains-to-order to have real AI, for all I know. It may just be too complex to do in hardware. After all, they're trying to side-step a couple million years of evolution into a few decades.
        • by dvice ( 6309704 ) on Saturday May 30, 2020 @10:08AM (#60124946)

          > Alphabet and its subsidiaries invested countless millions trying to develop AI for things like driverless cars.

          That is somewhat true. An interesting fact is that DeepMind is not directly involved with the driverless car project, but they helped a while ago by using reinforcement learning and evolutionary selection to reduce false positives by 24% compared to hand-tuned parameters [1].
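
Roughly what "evolutionary selection" of parameters means in practice: keep a population of candidate settings, score each one, and mutate the best performers. The sketch below is a generic toy version with a made-up objective function, not DeepMind's actual method (see the linked post [1] for that).

```python
import random

def false_positive_rate(params):
    """Stand-in objective: lower is better, minimized near (0.3, 0.7).

    In the real setting this would be an expensive evaluation of a detector
    on held-out data; here it is just a made-up quadratic bowl.
    """
    a, b = params
    return (a - 0.3) ** 2 + (b - 0.7) ** 2

def evolve(generations=50, population=20, keep=5, sigma=0.05):
    pop = [(random.random(), random.random()) for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=false_positive_rate)              # best candidates first
        parents = pop[:keep]                           # survival of the fittest
        pop = parents + [
            tuple(p + random.gauss(0, sigma) for p in random.choice(parents))
            for _ in range(population - keep)          # mutated offspring
        ]
    return min(pop, key=false_positive_rate)

random.seed(0)
best = evolve()
print(best, false_positive_rate(best))
```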

          The Waymo project itself is not trying to advance AI much; they are mostly focusing on training it. DeepMind, on the other hand, had the goal of improving AI so that it could beat humans at old video games, then Go (a decade before it was predicted), and then StarCraft. DeepMind was also investigating some interesting areas like long-term memory, but no further information came from those projects, so those trials possibly failed.

          After StarCraft they said that they were done with games and are now focusing on the health care sector.

          In health care they encountered paper documents, fax machines, etc., so getting good results requires some groundwork to get the data into the same format and into computers. But they have already had some good results predicting problems from the data.

          So at the moment it looks like DeepMind is not focused on creating AGI, but instead on implementing real-world solutions with the current technology. I don't have any information on why that is. Perhaps they wanted to have a meaningful project, or perhaps they hit a wall and saw no way over it. If the reason is the latter, it is rather strange, because they didn't really try for that long.

          Changes in AI technology are mostly in hardware. I'm not sure how old an invention it is to use evolutionary algorithms with reinforcement learning to train the AI. E.g. the original AI solution for the game of checkers in 1955 did not include an evolutionary algorithm, so it was missing the critical piece needed to keep getting better [2].

          The latest improvements in AI, in my opinion, are:
          - Hardware (making it possible to have better-than-human image recognition)
          - The combination of evolutionary and reinforcement learning, creating a lot of interesting projects like team-play bots [3]
          - Publishing and open-sourcing AI libraries, which has made it possible for small projects to use AI on everyday problems, like sorting cucumbers [4]

          I don't think that we will see any new improvements anytime soon, but we will see a lot of new implementations.

          [1] https://deepmind.com/blog/arti... [deepmind.com]
          [2] http://incompleteideas.net/boo... [incompleteideas.net]
          [3] https://www.youtube.com/watch?... [youtube.com]
          [4] https://medium.com/swlh/diy-ai... [medium.com]

          • So at the moment it looks like DeepMind is not focused on creating AGI, but instead on implementing real-world solutions with the current technology. I don't have any information on why that is.

            My take on that? Back to money, and investors, and stockholders. If you can't show a profit, can't show something marketable, then they're all going to turn on you, show you the door, tell you to not let it hit you in the ass on the way out, get someone in your place who can make a marketable, profitable product. So they take something half-finished and try to make something from it they can sell.

        • they can't get it across the finish line, because we have no idea how 'intelligence', 'reasoning', or 'thinking' actually works.

          Yes, so far the self-driving car has failed. But the reason you gave is not the reason why it failed, even though the reason you gave is a true statement in and of itself.

          To put it plainly: the self-driving car is an attempt at making a non-intelligent thing perform a complex task. It is not, and was never, an attempt at making a car intelligent.

          Remember once-upon-a-time when onl

          • Yes, decades ago when I was a teenager I was able to wire a logix computer (which is a bunch of multiple-pole switches) to play and win a game of tic-tac-toe. It did not even have electronics, yet it could beat half the students at school who played it. Lots of things don't need intelligence, even though most people think you have to be smart to do the job.
          • To put it plainly: the self-driving car is an attempt at making a non-intelligent thing perform a complex task. It is not, and was never, an attempt at making a car intelligent.

            I hear you, but what I'm saying is: you need it to be intelligent in a human way, to properly execute this very-much-human task of operating a vehicle in a human environment on roads made by-and-for humans.

            ..chess..

            Building software to play chess is closer to building software to manage all the elevators in a huge building with dozens of elevators, or perhaps something like a network of rail lines with automated locomotives or something like that, than it is trying to build software to operate a box on wheels that

        • Sure there is a lot of hype, but the development of the algorithm to train a deep neural network (as opposed to evolving it) was a significant, fairly recent advance and has resulted in systems that outperform the previous state of the art (e.g. on standardized vision-recognition benchmarks).
        • The cerebral cortex is, as far as the humans know, the most advanced piece of hardware in the universe. Artificial Intelligence? No need here. This is actual Intelligence (it's worth pointing out that we don't really have a firm grasp on what intelligence itself is). And the human is sitting there with that technology built into them, they're USING it too! But to my way of thinking, it's like using a 500T crane to build a 1T crane. Why are you building something that, in the end, will amount to less t

            This is actual Intelligence (it's worth pointing out that we don't really have a firm grasp on what intelligence itself is).

            That's at the core of my entire point, really: we don't understand 'intelligence' at all, and until we do, we can't make tools that can produce that quality, only cheap imitations that produce a fraction of that quality.

        • I largely agree with what you're saying.

          I don't think the key reason was driverless cars. It was Big Data. For years, Google, Facebook et al. sold ads, and themselves, on the premise that they had all that data that allowed them to do all sorts of magical things. Soon enough Big Data was the thing and everybody was busy collecting data. And failing to get anything in return. Surprisingly enough, the difficulty was not in collecting data, but in extracting knowledge from it. The answer was AI. That's how we ca

          • If you're designing an electronic sensor of some type, one of the big keys to it working correctly is the ability to filter out the noise data from the data you actually want; that's what you're describing about using some of this so-called 'AI' software with all the data that Big Data has been collecting.

            FWIW I bring up SDCs / 'driverless cars' very often when discussing this overall subject, because it's what I feel is the most ostensibly dangerous application of a 'technology' that is half-baked at be
        • I've talked to people who are in neuroscience. They tell me we really have no idea how a brain does what it does, and we're not likely to know for quite some time to come. I've also talked to people in software engineering. They tell me that the difference between the so-called 'AI' they keep trotting out today, and the so-called 'AI' they had 20 years ago, isn't much at all on the machine code level, it's just bigger, faster hardware for it to run on.

          Mm-hm: the advances that are shown aren't real because A.I. isn't real yet itself. What IS real is the self-fulfilling prophecy, if you can call it that in this case: seeing what you desperately want, gazing in awe, and filling in the gaps like some kind of non-visual gestalt where 3 dots make a face. The greatest liar of all is the brain, and it wants to make things happen; perception is reality. I remember sam-reciter making me gaze and a full 180kb floppy disk with a bruce springst

      • by ceoyoyo ( 59147 )

        AI/ML research has a couple of Achilles heels that make it vulnerable to this sort of thing. There's a reliance on benchmark datasets that are known to be unrepresentative, and also quickly get very overfit. Researchers tend to share code and data quite freely, which means it's very easy for other researchers to simply use and tweak things without understanding them deeply. The publishing standard in the field is preprint servers and conferences, which contributes to very rapid publication potentially with

    • by Slugster ( 635830 ) on Saturday May 30, 2020 @02:24AM (#60124194)
      Well no shit guys, some of us knew this months ago. That's when I moved all my investments into the Blockchain Shaving Club for Men.
    • by serviscope_minor ( 664417 ) on Saturday May 30, 2020 @03:38AM (#60124274) Journal

      Right. Just like I thought.

      No, you're both right and wrong. You appear to think it's all hype, nonsense and marketing. It's only mostly hype, nonsense and marketing, but there's a massive difference between almost all and all. The latter implies no advance, the former does not.

      Being a curmudgeon doesn't make you right, it's just Sturgeon's law in action.

      Big revelation!

      This is how research always works. In every conference either all or almost all papers will rapidly sink into obscurity. People will try them, find they don't work and ignore them. The few that are genuine advances will get picked up and used. The trouble is no one knows beforehand which the genuine advances are.

      It's a pointless game to point out that with exactly the right bits of hindsight and some modern tweaks you could make an old method work well. If it was obvious you would have done it then. It's only obvious now after an additional 10 years of research.

      • by lgw ( 121541 )

        No, you're both right and wrong. You appear to think it's all hype, nonsense and marketing. It's only mostly hype, nonsense and marketing, but there's a massive difference between almost all and all. The latter implies no advance, the former does not.

        Being a curmudgeon doesn't make you right, it's just Sturgeon's law in action.

        Big revelation!

        Very well put.

        With all advancement, 90% of ideas that look great on paper are shit, but you only know by trying. I just wish more people recognized this fact of life when it comes to ideas for social progress.

    • by hey! ( 33014 )

      Well, yes, and no. Yes, hype, in that progress in practical applications wasn't the result of some kind of advance in AI. But that doesn't mean useful advances weren't made in products; they were just made the old-fashioned way: elbow grease.

      Making an app *good* is an exercise in tweaking. You maybe can get to minimally functional by studying users and imagining how they will use the system a priori, but that's only halfway there. You have to get it into real users' hands, observe how they actually use i

  • by Way Smarter Than You ( 6157664 ) on Saturday May 30, 2020 @01:38AM (#60124116)
    That's the kind of guy everyone hates. He'll be the first against the wall when the revolution comes.

    He's not only advancing his own career by telling the truth, he's ruining the careers of countless others who were successfully buzzword-scamming their way into lucrative careers, VC funds, and other goodies. And if his study becomes better known, he's going to kill the golden goose.

    When the revolution came, he was the first against the wall and shot.
  • by Joe2020 ( 6760092 ) on Saturday May 30, 2020 @01:59AM (#60124164)

    The researchers seem to have missed making the most important connection... Learning doesn't only come with success, but with failure, too. So both AI and humans learn through both, and failure can be a real advance. We may just fail to see the positive side of failure sometimes. Still, even without having made that connection, their findings are another step forward.

    • by ceoyoyo ( 59147 )

      Training neural networks absolutely makes use of failure to learn. In fact, there are many papers showing exactly what you've said: failures are far more useful than successes.

  • Initial progress will be good, but then further progress will be impossible once the top of the tree is reached.

    A famous description of symbolic AI decades ago. And it is as much nonsense now as it was then.

    We have been working on this problem for just 60 years, half that time with very limited hardware. And we can now do some amazing things: speech and vision, usable natural language translation, world champion at both Go and, more amazingly, Jeopardy!

    But still nowhere near as generally intelligent as man. Yet.

    Give

    • No.

      As far as any sort of actual general intelligence goes, nowhere near as generally intelligent as an actual cockroach.
    • by Megol ( 3135005 )

      Our computers now can't even compete with a mouse in tasks requiring actual intelligence, given almost unlimited power and size. And this when we are approaching the physical limits of packing density (with current technology at least).

      No, we have more computational power, but we don't have the knowledge needed to make something truly intelligent. We need to know what hardware is needed for what task to get true artificial intelligence rather than mere pattern matching.

  • Most of the problems companies think they have can be solved with methods you learn in junior high, like linear regression. Even in the places where deep learning is useful, fancy new methods like Transformers are often still outpaced by traditional methods like RNNs with LSTMs in some cases.
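
For what it's worth, the "junior high" method really is a few lines. A minimal least-squares sketch, where the ad-spend/sales numbers are invented purely for illustration:

```python
import numpy as np

# Made-up data: ad spend vs. sales, the kind of "AI problem" a plain
# least-squares fit often handles perfectly well.
spend = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
sales = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Fit sales ≈ a * spend + b by ordinary least squares.
A = np.stack([spend, np.ones_like(spend)], axis=1)
(a, b), *_ = np.linalg.lstsq(A, sales, rcond=None)
print(f"sales ≈ {a:.2f} * spend + {b:.2f}")
```
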
  • The "if we can make it work, we'll be billionaires" attitude in conjunction with "we'll pretend it works until we make it work" scheme will only get you so far. Eventually you have to deliver, bit the hype got to critical mass and nobody has been able to actually deliver meaningful progress on virtually any front, so motherfuckers are just blowing smoke in service of "we're faking it until we have that big breakthrough" or incompetence or just hubris.
  • They are running into Sault's Law, which states that a thing cannot make an artifact as complex as itself. It is an asymptotic goal. They are running into the limit in many things probably.
    • by K. S. Kyosuke ( 729550 ) on Saturday May 30, 2020 @05:54AM (#60124486)

      a thing cannot make an artifact as complex as itself

      See? You keep telling children that babies are brought by storks, and *this* is what you get when they grow up!

    • That doesn't sound like a law or even an observation so much as something some dude made up.

    • This is an empirically false claim. We know that intelligence can arise from evolution. So complex thinking systems can arise from simple things.
      • Yes, intelligence arose over 4.5 billion years. And humans will make it artificially in a machine in how long? Never.
        • Life got to approximately that level of intelligence without that as any sort of end goal -- evolution is not at all goal-directed. That alone already shows that your basic claim is false, since it can happen. All the more so, one should expect it to be able to happen when one has intelligent beings trying to direct it. As for the amount of time, that's an even worse comparison, since that's the amount of time it took for it to arise to an intelligence far beyond the starting point of zero. Finally, we already k
          • Chess and Go are not general intelligence, g factor. They are games with relatively simple rules and amenable to computer programming and brute force solutions with enough computer power. And I challenge you to give a detailed definition of intelligence. What exactly is the goal? The term AI is really just a description of clever programming and not artificial intelligence at all. And "..we've had computers help make discoveries in math since the late 1990s." Sure, they are tools, like hammers. They help th
            • by dcw3 ( 649211 )

              "They are games with relatively simple rules and amenable to computer programming and brute force solutions with enough computer power. "

              Brute force plays a part, but it's not the overriding reason Chess and Go have been conquered. Heuristics have been programmed into chess gaming since the 70s. As for Go, there's not enough computing power on the planet to brute force all the possibilities...yet.

              • "Deep Blue, with its capability of evaluating 200 million positions per second, was the first and fastest computer to face a world chess champion. Today, in computer-chess research and matches of world-class players against computers, the focus of play has shifted to software chess programs, rather than using dedicated chess hardware. Modern chess programs like Houdini, Rybka, Deep Fritz or Deep Junior are more efficient than the programs during Deep Blue's era. In a November 2006 match between Deep Fritz

              • Brute force plays a part, but it's not the overriding reason Chess and Go have been conquered.

                Yes, it is. Limit the searching to a depth of two, or three (same as an average human) and you'll see the humans trounce it.

                When Chess was beaten, they were mining the moves down to impossible depths (100s per piece per move) to find a win state. If you think Brute Force isn't the biggest contributor to wins, then limit the depth and let the computer play on heuristics alone.
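
The experiment suggested above is easy to express: a depth-limited game-tree search, where the depth cap decides how much brute force is allowed and the heuristic carries everything else. The sketch below is a generic negamax skeleton with a toy subtraction game as the demo; nothing in it is specific to chess or Go, and all the callbacks are placeholders.

```python
import math

def negamax(state, depth, heuristic, moves, apply_move, is_terminal, score):
    """Depth-limited game-tree search (negamax form).

    With a huge `depth` this is essentially brute force; cap it at 2-3 plies
    and the quality of `heuristic` is all that's left.  Every game-specific
    callback is supplied by the caller.
    """
    if is_terminal(state):
        return score(state), None          # value from the side to move
    if depth == 0:
        return heuristic(state), None      # out of search budget: guess
    best_value, best_move = -math.inf, None
    for move in moves(state):
        value, _ = negamax(apply_move(state, move), depth - 1,
                           heuristic, moves, apply_move, is_terminal, score)
        value = -value                     # opponent's gain is our loss
        if value > best_value:
            best_value, best_move = value, move
    return best_value, best_move

# Toy demo: "take 1-3 sticks from a pile; taking the last stick wins".
# With enough depth the search solves it exactly.
result = negamax(
    state=10, depth=10,
    heuristic=lambda pile: 0,              # a know-nothing heuristic
    moves=lambda pile: [m for m in (1, 2, 3) if m <= pile],
    apply_move=lambda pile, m: pile - m,
    is_terminal=lambda pile: pile == 0,
    score=lambda pile: -1,                 # side to move has already lost
)
print(result)   # (1, 2): a winning position, best move is to take 2
```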

            • So, dcw3 has already addressed a lot of these issues in their reply, and your reply to them completely ignored the issue of heuristics and the fact that Go is very much not brute force. I won't focus on that but instead the other issues in your reply.

              I challenge you to give a detailed definition of intelligence.

              Certainly intelligence is hard to define. But you seem to be implying that having trouble coming up with a definition of intelligence is a problem only for the people who disagree with you and not a difficulty for you at all. If you prefer, we can taboo the ter

              • Humans invent things all the time from scratch that don't require being "programmed" by others. Users may have to be taught to use a tool, but that is not what we are talking about. The creating human is not just being programmed. Think about technology developed since the cave days. One of the problems of science fiction is machines magically developing a motivational structure all by themselves.

                You are a mathematician, but why are you a mathematician and not a poet? It is because you have created a behavi

                • I agree that motivation is a very difficult problem that we don't really have anything similar to for AI. When my little brother was about 4 years old, he was playing with his Lego pieces and lining them up in different rectangles. I asked him what he was doing, and he said that he had noticed that some number of pieces he could put in only one long rectangle that was one piece wide and very long, but others he could make into than one rectangle; for example for 6 legos he could make them into a rectangle

                  • My point is that it isn't "AI." It is just clever programming used as a tool by actual "I." Calling it AI is just PR. A self-driving car will be able to beat a human in a marathon, but it isn't an "athlete."
  • Define "AI" (Score:4, Insightful)

    by nagora ( 177841 ) on Saturday May 30, 2020 @05:54AM (#60124482)

    AI. Noun: a mixture of 60% pattern-matching and 40% marketing. A substance mainly used in the procurement of grants and contracts.

    • Except old mainframe-based Walmart, who just did data cubes to work out what went with what. It used to be called statistical analysis and forecasting. Occasionally blockbusters came along -- ones that everyone wanted. They sorted that too. Presently Corona has taken a lot of cash out of people's pockets, and credit SHOULD be tightened. Hard to sell crap below the top market segment presently. Good to see the AI BS exposed.
  • by Kjella ( 173770 ) on Saturday May 30, 2020 @06:47AM (#60124618) Homepage

    It's a struggle with neural nets to make them learn the actual distinguishing features rather than memorize the training data; this problem is called overfitting. If you have "friendly" data we have good techniques to keep this manageable. However, these artifacts of the training are the source of problems with "adversarial" data: you can exploit them to make the algorithm make mis-predictions. If you look at this graph [wikimedia.org] you can imagine the zigzags of the green line being exploited, while a less fit model like the black line can't be easily fooled.

    So what's the simplest way to avoid memorizing too much? Stopping when you've learned enough. Alternatively you could learn the green line then try to smoothen the bad twists. Does that work out mostly to the same thing? Yes, on some metrics. No, on others. There's a number of ways to create bad images and this paper doesn't cover all of them. It just shows that adversarial training is both learning and unlearning details and if you do too much unlearning you'll end up with the same/worse performance as stopping early.
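
For the "stopping when you've learned enough" part, the standard recipe is early stopping: watch a held-out validation loss and keep the best checkpoint instead of the last one. A minimal, framework-free sketch, where the training and validation callbacks are placeholders you would supply:

```python
def train_with_early_stopping(train_one_epoch, validate, max_epochs=100, patience=5):
    """Generic early-stopping loop.

    `train_one_epoch()` updates the model and returns its state; `validate()`
    returns a validation loss.  Training stops once the validation loss has
    not improved for `patience` consecutive epochs -- the cheap, standard
    defence against memorizing the training data.
    """
    best_loss, best_state, bad_epochs = float("inf"), None, 0
    for epoch in range(max_epochs):
        state = train_one_epoch()
        loss = validate()
        if loss < best_loss:
            best_loss, best_state, bad_epochs = loss, state, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break                      # validation stopped improving
    return best_state, best_loss

# Toy demo: a fake validation-loss curve that bottoms out around epoch 20
# and then creeps back up, triggering the patience counter.
losses = (abs(epoch - 20) / 20 + 0.1 for epoch in range(100))
best_state, best_loss = train_with_early_stopping(
    train_one_epoch=lambda: "checkpoint",   # stand-in for real training
    validate=lambda: next(losses),
)
print(best_loss)   # 0.1, reached at epoch 20
```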

  • Neural nets have become a crutch for people who can't do feature engineering. For smaller datasets, simpler algorithms can do a better job with good feature engineering. Also, using neural nets on small datasets is not ideal: to train a neural network, a huge training set -- on the order of a billion data points -- is needed, so of course the accuracy will be terrible for small datasets. We often use neural nets for analyzing unstructured data because it's less clear what the features are -- vision and audio data, for example.
    • by Tablizer ( 95088 )

      Indeed. I've made fairly good industry-specific document/message categorizers just by using weighted word and phrase matching, including negative weights. With the help of a domain expert (or several), you can gradually tune it to be pretty good.
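
A bare-bones version of that weighted word/phrase approach might look like the sketch below; the categories, phrases, and weights are invented placeholders standing in for what a domain expert would actually tune.

```python
# Hypothetical hand-tuned weights: positive terms pull a document toward a
# category, negative ones push it away.  A real deployment would have far
# more phrases, refined iteratively with a domain expert.
WEIGHTS = {
    "invoice": {"invoice": 3.0, "amount due": 2.0, "po number": 1.5, "resume": -2.0},
    "support": {"error": 2.5, "not working": 2.0, "crash": 2.0, "invoice": -1.0},
}

def categorize(text):
    """Score each category by summing the weights of matched words/phrases."""
    text = text.lower()
    scores = {
        category: sum(w for phrase, w in phrases.items() if phrase in text)
        for category, phrases in WEIGHTS.items()
    }
    return max(scores, key=scores.get), scores

label, scores = categorize("The app shows an error and keeps crashing after login.")
print(label, scores)
```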

  • As loosely as a perfect sphere on a sinusoidal trajectory mimics a horse race. They are shit for a reason.

  • There was a lot of AI hype in the late 60s and the 70s. People would come up with all sorts of extravagant predictions, which did not pan out. The AI winter of the 80s and 90s ensued. And now the industry seems to be repeating the same pattern: another AI winter is nigh.
  • AI at its heart is still just fuzzy pattern recognition. You add more nodes and a faster ability for the node weights to adjust, you get better and more complex results. It is cool, but it is not a great leap in machine consciousness or thinking IMO. It is more like Moore's law applied to neural networks.
    • by Kjella ( 173770 )

      You add more nodes and a faster ability for the node weights to adjust, you get better and more complex results. It is cool, but it is not a great leap in machine consciousness or thinking IMO.

      There's so much about the brain we don't understand that it's not funny. But one thing we're positively certain of from raw electrical signals and brain tissue is that it has no CPU operating at gigahertz or megahertz or even kilohertz speeds. Instead it's a massive number of neurons (10^11) with a large number of connections (10^15 total) operating at <200 Hz. Figuring out how complex behavior can arise from simple building blocks is super interesting for trying to decipher/replicate the human mind.
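
Back-of-the-envelope, those order-of-magnitude figures work out as follows; this is a rough illustration of scale using only the numbers quoted above, not a measurement.

```python
# Order-of-magnitude figures quoted in the parent comment.
neurons = 1e11        # ~10^11 neurons
synapses = 1e15       # ~10^15 connections
max_rate_hz = 200     # firing rates stay under ~200 Hz

# Average fan-out per neuron, and an upper bound on "synaptic events" per
# second if every connection fired flat out.
print(f"average connections per neuron: {synapses / neurons:.0f}")            # 10000
print(f"synaptic events/s, flat-out upper bound: {synapses * max_rate_hz:.0e}")  # 2e+17
```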

  • AI has always progressed in fits and starts. We probably hit a wall on throwing computing power at the pattern-matching ability of neural nets, and further breakthroughs may require cognitive breakthroughs: logic and reasoning.

    I still expect some interesting domain-specific tweaking of neural nets together with human-guided heuristics, such as bunches of weighted IF statements.

