AI Education

Stanford's Proposal Over AI's 'Foundations' Creates Controversy (wired.com) 100

ellithligraw writes: Last month a Stanford research paper, coauthored by dozens of Stanford researchers, which terms some artificial intelligence models "foundations," set off a debate over the future of AI. A new research facility has been proposed at Stanford to study these so-called "foundation models." Critics call these "foundations" will "mess up the discourse."
The debate centers on what Wired calls "colossal neural networks and oceans of data." Some object to the limited capabilities and sometimes freakish behavior of these models; others warn of focusing too heavily on one way of making machines smarter. "I think the term 'foundation' is horribly wrong," Jitendra Malik, a professor at UC Berkeley who studies AI, told workshop attendees in a video discussion. Malik acknowledged that one type of model identified by the Stanford researchers — large language models that can answer questions or generate text from a prompt — has great practical use. But he said evolutionary biology suggests that language builds on other aspects of intelligence like interaction with the physical world. "These models are really castles in the air; they have no foundation whatsoever," Malik said. "The language we have in these models is not grounded, there is this fakeness, there is no real understanding...."

Subbarao Kambhampati, a professor at Arizona State University [says] there is no clear path from these models to more general forms of AI...

Emily M. Bender, a professor in the linguistics department at the University of Washington, says she worries that the idea of foundation models reflects a bias toward investing in the data-centric approach to AI favored by industry... "There are all of these other adjacent, really important fields that are just starved for funding," she says. "Before we throw money into the cloud, I would like to see money going into other disciplines."


Comments Filter:
  • ISO listed AI radical but practical? *shrugs*
  • The languages used to create AI should be ISO standards? *shrugs* Sorry for the extra post. =)
    • by AleRunner ( 4556245 ) on Monday September 20, 2021 @08:11AM (#61813283)

      There's a recent result that the complexity of processing a single neuron is at least equivalent to a three layer neural network. To be honest, we have no real idea what's going on inside there, how it maps to "intelligence", or whether we have understood the key biochemical processes involved at all. This means that most of the current "AI research" is pure cargo culting. Are neural networks key to intelligence? Well, a slime mould has exactly no inter-cellular connections and yet shows intelligence. There is real science ongoing, and there's nothing wrong with building an incorrect model and then seeing if it works - that's how we find things out. However the current test of a good AI researcher is that they should be declaring failure. Anyone talking about success just around the corner is a charlatan.

      The languages used to create AI should be ISO standards? *shrugs*

      Why bring this up? If you have no idea what you are doing, you can't create a reasonable ISO standard. The current languages (and neural network models) may have nothing to do with how you create AI. This is a matter of ongoing and open research where nobody has good guesses or understanding yet.

      • "Anyone talking about success just around the corner is a charlatan."

        Agreed, but honesty will get your project pulled, especially because every other AI project has success just around the corner.
        • Agreed.
          But I have noticed some interesting projects that are AI-based.
          They involve identifying what an object is, like types of birds and plants.
          I just wish one existed for Medicare.

      • Exactly. I think when someone really cracks AI, it will be an abstract symbolic model. You will be able to map all human brain functions onto this model, but it will also present other uncomfortable truths, such as slime molds or forest fungal root systems or whatever also being mapped to the same thing. We have to beware of enshrining human centrism.

        • Same thing goes for anthropomorphism.
        • by HiThere ( 15173 )

          That depends on what you mean by "abstract symbolic model". With my meaning of the terms that wouldn't be anything intelligent, though I suppose it might be an oracle.

          I suppose you could have a definition of "symbolic" that's sufficiently abstract that that could be made to work, but that would be at the level where voltage level can be considered a symbol. It's not what is usually meant, even though it can be included. And in that case various sensors could feed "symbols" into the process.

          In my opinion,

          • But you are not creating intelligence on a computer; you are coding elements of an intelligence potential into a finished form. Biologic definitions of intelligence as applied to computer intelligence cannot be satisfied, mainly because you cannot reproduce the biologic elements that create intelligence or reverse engineer biologic intelligence to support any computer intelligence model.

      • You're making a categorical error here, though.

        Sure, human and animal intelligence works that way. But that doesn't mean it's the only way we can define intelligence. As you yourself note, slime moulds can build complex, adaptive, and possibly even intelligent behaviors without anything resembling a "connectionist" style neuron system.

        What we ARE pretty sure of is the basic perceptron calculus of a "neuron" as a unit of calculation that takes a bunch of inputs, weights them (positively or negatively) and then activ
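
        For reference, here's a minimal sketch of that perceptron calculus in Python; the inputs, weights, and bias below are made up purely for illustration.

        ```python
        def perceptron(inputs, weights, bias):
            # Weighted sum of the inputs plus a bias term.
            total = sum(x * w for x, w in zip(inputs, weights)) + bias
            # Step activation: the unit "fires" only if the sum crosses zero.
            return 1 if total > 0 else 0

        # Example: two excitatory inputs and one inhibitory input (values made up).
        print(perceptron([1.0, 0.5, 1.0], [0.8, 0.6, -1.2], bias=-0.1))
        ```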

        • by HiThere ( 15173 )

          The only way we can currently define "intelligence" is from an external view. In that case we can say something like:
          Intelligence is the ability to respond to a particular state in a way likely to cause a transition to a more desired state.

          If we do that, then anything that can do that should be considered intelligent, and the higher the reliability of the successful transition, the more intelligent the entity. Of course, then we need to define "desired".

          • The only way we can currently define "intelligence" is from an external view.

            That is the only proper way to define it. If a system behaves intelligently, then it is intelligent. The internal mechanism is irrelevant.

            • by HiThere ( 15173 )

              I *think* your stance is too extreme. But I'm not sure. There might be a way to define intelligence based around introspection that would lead to a slightly different set of entities being considered intelligent.

              Just consider that when only judging by results one could decide that a thermostat was intelligent. (Or at least that an air conditioning system was.) Perhaps it *should* be considered minimally intelligent. I've argued that way in the past. But I'm not sure that's the *only* way to define in

              • Intelligence is the ability to formulate an effective initial response to a novel situation.

                An air conditioner can't do that. It only handles the specific scenarios it was designed to handle.

                If an AC was faced with a novel situation, such as someone leaving a window open, it would fail to respond effectively.

                An AC is not intelligent because it fails to behave intelligently.

                • Intelligence is the ability to formulate an effective initial response to a novel situation.

                  An air conditioner can't do that. It only handles the specific scenarios it was designed to handle.

                  If an AC was faced with a novel situation, such as someone leaving a window open, it would fail to respond effectively.

                  An AC is not intelligent because it fails to behave intelligently.

                  I don't think that's a good or safe definition of what intelligence is, but it's a very good definition of one of the attributes of intelligence. It's definitely a great statement of what's wrong with most deep-learning setups. Instead of trying to deal with the novel, what they do is try to eliminate it by thinking of every possibility and covering them all. Which will be great until an alien spaceship appears above the road making all the cars under it disappear. An intelligent creature would immediate

          • The thing that gets complicated is we don't want to fall into the trap the behaviorists made in the 1950s. They argued that because we can't quantify phenomenological states, we can't use internal states as part of a scientific inquiry. On the surface, that seems reasonable, except the behaviorists added one and one and got three, and decided that internal states *don't matter*, only behaviors, and therefore internal states aren't even real. The end result was we reduced human behaviors to the dogs salivating at

            • by noodler ( 724788 )

              Whatever intelligence is, it's ultimately a transfer function that takes the sum of senses as its input space and produces a set of states and behaviors as its output space.

              I think you word it too much as a straight input->output system.
              I'm pretty sure that some of the produced states are acting as inputs, even going through parts of the sensory processing and thus form pretty long informational loops inside the brain system.

          • Can't intelligence be classified in some way as the ability to keep state in a controlled, predictive and patterned way? Therefore all matter is intelligent?

            • Can't intelligence be classified in some way as the ability to keep state in a controlled, predictive and patterned way? Therefore all matter is intelligent?

              In some sense that's the "strong AI [wikipedia.org]" position taken to an extreme - some people claim that a thermostat is intelligent since it reacts to its world. Just taken on its own, though, it's not a very useful definition of "intelligent". If you could show that a rock has "experience" as some people believe, then it might be. If you just say that "the rock is intelligent because it is" then you need a different word for the characteristic of humans that allows them to understand and manipulate the world, predict

        • One can always start from the bottom-up [solarbotics.net] in approaching the problem.

        • Sure, human and animal intelligence works that way.

          I'm not sure you are answering my comment. I, in no way, said which way intelligence works, though perhaps you are suggesting something different? In fact, to be specific, what I'm suggesting is that we a) don't know how human intelligence works and b) there's reasonable evidence that the perceptron model is inadequate.

          If what you want to say is that AI is no longer the search for Artificial General Intelligence and that we are no longer searching for the possibility of at least human level intelligence t

          • Again with the category errors. You're confusing what AGI is.

            AGI is a search for AI that can do what humans do. It's not the search for *how* humans do it. SOME researchers are doing that. Most are not. Hell, for most of AI's research history we were not even looking at neural networks but at symbolic reasoning via things like inductive engines.

            • by noodler ( 724788 )

              AGI is a search for AI that can do what humans do. It's not the search for *how* humans do it.

              Can you prove that the GPT3 algorithm you talked about above actually does 'what humans do'?

        • by noodler ( 724788 )

          GPT3 may just qualify as a slightly cooked AGI

          How so?
          It can produce words based on learned statistics.
          How is this a general skill? How is it anything near AGI?

          Besides this, AGI is a very badly defined and intrinsically anthropocentric term. I personally don't think human intelligence is particularly general in the first place. It is quite optimized for dealing with specific problems that were present in our evolutionary development, and the 'general' part is pretty limited and particular to humans.
          To me it's human hubris to call ourselves 'general' int

          • Besides this, AGI is a very badly defined and intrinsically anthropocentric term.

            Serious AI researchers have always been quite careful about this. Our problem is we don't really have a proper definition of "intelligence", so we end up with tests like the Turing test, which are anthropocentric, but that's only because we don't know any better. As I said at the start of this - the researchers that admit that they don't know any better are doing real scientific experiments.

            To me it's human hubris to call ourselves 'general' intelligences. We are mostly 'specific' intelligences.
            So what kind of precedent does this make for an AGI?

            You definitely have some point here - humans are very much good at identifying threats and prey using their eyes. T

        • As you yourself note, slime moulds can build complex, adaptive, and possibly even intelligent behaviors without anything resembling a "connectionist" style neuron system.

          That's not true at all.

          Our best current hypothesis is that the cytoskeletons of slime moulds form interconnected chemical information signaling networks that we'd be hard pressed NOT to call connectionist architectures...

      • by ceoyoyo ( 59147 )

        You should read that paper. It's quite good.

        A cortical pyramidal neuron, one of the most complicated kinds, seems to have a transfer function that's about equivalent to a fairly small three layer neural network. Other types of neurons have transfer functions that are much simpler.

        The result suggests that the brain's processing is "bunchier" than a run of the mill neural network, but it doesn't really say that things are super complicated. If anything it shows that a pretty simple artificial neural network is
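
        For concreteness, here's a rough sketch of what "equivalent to a fairly small three layer network" means in code; the layer sizes and random weights below are arbitrary placeholders, not the values fitted in the paper.

        ```python
        import numpy as np

        rng = np.random.default_rng(0)

        def relu(x):
            return np.maximum(0.0, x)

        # Synaptic inputs feed two small hidden layers and a single output,
        # standing in for the modelled neuron's firing response.
        W1 = rng.normal(size=(128, 32))
        W2 = rng.normal(size=(32, 32))
        W3 = rng.normal(size=(32, 1))

        def surrogate_neuron(synaptic_inputs):
            h1 = relu(synaptic_inputs @ W1)
            h2 = relu(h1 @ W2)
            return relu(h2 @ W3)

        print(surrogate_neuron(rng.normal(size=128)))
        ```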

      • Neural network models are older than the internet, as are cognitive models and theories, and predictive text is an old trick, to use an example I guess.
        All pointing to an established development history before any "official" breakthrough.
        And it's my opinion that "getting it wrong" is good for innovation, but creating utility languages without a proven standards goal to this objective has to be within reason as the other uses of the language always suffer.

      • "However the current test of a good AI researcher is that they should be declaring failure. Anyone talking about success just around the corner is a charlatan"

        Exactly.
        The unintended behavior of even the YouTube/FB/Twitter algos is surprisingly negative, and it ends up affecting a huge bunch of humans too, and god knows what more effects down the line & over time. And this was basically just rudimentary AI (not even really AI except in the limited way of finding patterns in big data and fine tuning hundreds

      • There's a recent result that the complexity of processing a single neuron is at least equivalent to a three layer neural network.

        For a single, specific type of neuron, sure.

        However, it's been conclusively demonstrated (through functional replication) that the functioning of most "standard" neurons (for instance retinal neurons for edge detection) is virtually completely accounted for by the standard artificial neuron in a feedforward network.
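
        As an illustration of that claim, edge detection really can be written as a single standard artificial neuron - a fixed weighted sum over a patch followed by a nonlinearity. The kernel and threshold here are hypothetical, just the textbook vertical-edge filter.

        ```python
        def edge_neuron(patch):
            # 3x3 weights: left column negative, right column positive,
            # i.e. a crude vertical-edge detector.
            weights = [[-1, 0, 1],
                       [-1, 0, 1],
                       [-1, 0, 1]]
            total = sum(patch[r][c] * weights[r][c]
                        for r in range(3) for c in range(3))
            return max(0, total)  # ReLU-style activation

        # A dark-to-bright vertical edge produces a strong response (prints 3).
        print(edge_neuron([[0, 0, 1],
                           [0, 0, 1],
                           [0, 0, 1]]))
        ```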

        • There's a recent result that the complexity of processing a single neuron is at least equivalent to a three layer neural network.

          For a single, specific type of neuron, sure.

          Which just happens to be the most common one in the human brain. I wonder why?

          However, it's been conclusively demonstrated (through functional replication) that the functioning of most "standard" neurons (for instance retinal neurons for edge detection) is virtually completely accounted for by the standard artificial neuron in a feedforward network.

          Recent advances in neuroscience are great. I personally feel that we will soon either understand the basic function of the brain or have a clear statement of where the weird stuff is going on. Having said that, years of experience of believing that "understanding of intelligence is just around the corner" say that I am almost certainly wrong.

  • Huh? (Score:5, Insightful)

    by oneiros27 ( 46144 ) on Monday September 20, 2021 @06:54AM (#61813119) Homepage

    Critics call these "foundations" will "mess up the discourse."

    This doesn't quite parse as an English sentence.

    Does this mean that we've got AI trying to post articles about itself? If so, should we be proud or worried?

    • Never ascribe to AI what can be adequately explained by incompetent "Editors" who don't.

    • Re:Huh? (Score:5, Funny)

      by Entrope ( 68843 ) on Monday September 20, 2021 @07:20AM (#61813161) Homepage

      Are you worried that this means that we've got AI trying to post articles about itself?

      Can you explain why we should be proud or worried?

      Do you think coming here will help this parse as an English sentence?

      • Here's the thing: we've got an academic debate raging as to whether a class of neural-network models used for language processing is a "foundation" or not. Well fuck people, someone just ask the damned AI already. I promise you'll get a definite answer very quickly and I can get back to my day drinking.
      • Nice try, Eliza.

        You're still not going to pass the Turing test.

        • Does it bother you that Eliza can pass the Turing Test?

            Not even close. Only when the topics and conversational complexity are reduced to grade school levels can it speak coherently and to the point, and even then it can't carry on a conversation of any length. The fingerprints of programmers mucking with the responses to abort those are all over it.
            • Not even close. Only when the topics and conversational complexity are reduced to grade school levels can it speak coherently and to the point, and even then it can't carry on a conversation of any length. The fingerprints of programmers mucking with the responses to abort those are all over it.

              Can you elaborate on that ?

          • by HiThere ( 15173 )

            Eliza only passed a very simplified approximation of the Turing test. The guy at the other end of the teletype didn't even consider that Eliza might be a program on a computer. (But he was convinced that it was a human, and he was going to try to get her fired. )
            (Sorry, I couldn't find a link the the reference I was remembering.)

            • Eliza only passed a very simplified approximation of the Turing test. The guy at the other end of the teletype didn't even consider that Eliza might be a program on a computer. (But he was convinced that it was a human, and he was going to try to get her fired. )
              (Sorry, I couldn't find a link the the reference I was remembering.)

              Do you say sorry, you couldn't find a link the the reference you was remembering for some special reason ?

              • by HiThere ( 15173 )

                Yes. I was talking about one specific encounter in the very early days of Eliza, and I'm not absolutely certain that I remember all the details of the report I read correctly...or that the report was completely correct.

                • How long have you been not absolutely certain that you remember all the details of the report you read correctly... or that the report were completely correct?
                  • How long have you been not absolutely certain that you remember all the details of the report you read correctly... or that the report were completely correct?

                    That is interesting. Please continue. :-P

                • What makes you think that you're not absolutely certain that you remember all the details of the report you read correctly...or that the report was completely correct?

                  (And that's three in a row :-P)

      • "Are you worried that this means that we've got AI trying to post articles about itself?"

        So that's the meaning of Singularity !

    • Re:Huh? (Score:4, Interesting)

      by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Monday September 20, 2021 @07:35AM (#61813201) Homepage Journal

      I presume it means we've got Chinese editors masquerading as Americans. That kind of error is ubiquitous in Chinese text translated to English. But maybe it's just the editor not editing (because they don't) and the author being a bot, or a troll farm, or a dope. If you check their posting history you can see they sometimes but don't always sign their posts.

    • It is not like the argument is about semantic distinctions and clarity... oh, wait...
    • Explanation (Score:5, Interesting)

      by Okian Warrior ( 537106 ) on Monday September 20, 2021 @09:19AM (#61813525) Homepage Journal

      AI as a field of study is, in a sense, misguided.

      On the one hand, there are programs and solutions with great practical value, such as the ones mentioned in the OP. Most of the research is here, and most of the research is aimed at providing even more solutions with great practical value.

      You could term these solutions "weak AI", as opposed to "strong AI". It's closely analogous to writing a chess-playing program: a programmer learns to play chess, then while playing he turns his inner-eye to note his brain working, thinks "when I play chess, I do *these* actions", writes those actions down in the form of a program, and calls it AI.

      The human brain can learn chess, checkers, go, go moku, poker, and any of hundreds of different games, and so far as anyone can tell, the human brain has no circuitry specific to chess. The brain is running a more general algorithm that can apply to *any* game and any intelligence problem.

      This algorithm would be termed "strong AI" or "AGI" ("general"). Very little work is being done in this area; no one seems to have any idea how it might be accomplished.

      One problem with AGI is that there is no constructive definition - we don't really know what it *is*. Modern AI solutions mimic the brain's structure to a small extent (recurrence in neural networks), but the overall structure and design is simply "something people came up with" that was not informed by how the human brain works. We know that intelligence has look-ahead and planning (chess-playing programs), and reasoning that looks a lot like formal logic with a layer of probability (inference engines), but beyond that little is known.

      To the OP: Taking pieces of the current field of study as a foundation may be counterproductive and might hold the field back for decades if the terms and structures become canon.

      As an example, note that the current mix of recurrent neural nets calculated by a huge tensor matrix is still based on the hidden layer markov model, with input nodes, hidden layers, and output nodes. The brain absolutely is *not* a pipeline structure with a definite input on one side and output on the other, so at the very least current solutions differ from the brain in this basic structure.

      As a second example, the brain uses roughly 10x more feedback than feed forward in its neural pathways. No one knows why this is, what it is doing, or why it is needed; the concepts were too complex to deal with at the start of the field of study (decades ago, when computers were less powerful), so this feature was simply ignored. The brain uses feedback as part of its processing, not specifically learning, and we don't know what this processing is. Most information processing in modern systems is feed forward.

      So publishing a paper laying down the fundamentals of AI may not be good at this moment in time; it will guide the content of textbooks and learning for decades to come and might stall the field.

      For many reasons, including the examples given.
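
      To make the feedforward-versus-feedback contrast above concrete, here is a toy sketch: a strict input-to-output pipeline next to a network whose output is fed back into the hidden layer on every step. The sizes, weights, and the tanh nonlinearity are arbitrary illustration, not a claim about what the brain's feedback actually computes.

      ```python
      import numpy as np

      rng = np.random.default_rng(1)
      W_in = rng.normal(size=(4, 8))    # input -> hidden
      W_out = rng.normal(size=(8, 4))   # hidden -> output
      W_fb = rng.normal(size=(4, 8))    # output fed back into hidden

      def feedforward(x):
          # Pipeline structure: one pass from input to output.
          return np.tanh(np.tanh(x @ W_in) @ W_out)

      def with_feedback(x, steps=5):
          # The previous output re-enters the hidden layer on every step.
          y = np.zeros(4)
          for _ in range(steps):
              h = np.tanh(x @ W_in + y @ W_fb)
              y = np.tanh(h @ W_out)
          return y

      x = rng.normal(size=4)
      print(feedforward(x))
      print(with_feedback(x))
      ```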

      • I see reams and reams of concern from people trying to disabuse others of the notion that general AI has been solved. But who are these "others," really? I don't think they exist.
      • so far as anyone can tell, the human brain has no circuitry specific to chess.

        You mean other than the synapses being reinforced with use and the new dendrites being grown to accommodate chess-like thought? Other than that?

      • Modern AI solutions mimic the brain's structure to a small extent

        How so? Please be specific.

        • by HiThere ( 15173 )

          I'm going to take you seriously.

          Look up "perceptron". It's a *VERY* simplified neural model. We don't really know how a neuron operates, we know parts of how it operates, and the "artificial neurons" only include a small part of what we know. Is it the important part? We don't know. We do know that it's at least *part* of the important part. There's good evidence that it's not all of the important part, but that evidence isn't totally convincing...and even if correct doesn't say which additional parts

        • Modern AI solutions mimic the brain's structure to a small extent

          How so? Please be specific.

          Recurrency in the current model emulates the layered cortical column structure of the brain, and ANNs themselves emulate the neuron/synapse interface. Also, the prefrontal cortex is apparently tasked with thinking ahead in time, which is analogous to the look-ahead found in chess-playing-like programs. Also, LSTM networks appear to mimic the psychological effect known as "Priming".

          You mean other than the synapses being reinforced with use and the new dendrites being grown to accommodate chess-like thought? Other than that?

          You're being trollish.

          The synapse growth you mention stems from a neuronal system that is universal, and not specific to chess.

      • Reading your interesting reply, a quote from Blade Runner came to mind: "How can it not know what it is?" That quote is taken a bit out of context, but it's interesting to ponder how something that is self-aware is trying to understand how it is that way.
      • Very little work is being done in this area; no one seems to have any idea how it might be accomplished.

        DeepMind has created a program called MuZero that can win at chess, Go, and other games, without even being programmed with the rules of those games. I guess someone does seem to have an idea how it might be accomplished!

      • As a second example, the brain uses roughly 10x more feedback than feed forward in its neural pathways.

        Are you talking about the glial cells? There are 10x as MANY glial cells as neurons, but that doesn't mean the brain uses "10x more feedback than feedforward". The main role of the glial cells is as a scaffolding support structure, which is why they have so many more.

        No one knows why this is, what it is doing, or why it is needed; the concepts were too complex to deal with at the start of the field of study (decades ago, when computers were less powerful), so this feature was simply ignored.

        What are you talking about? We've known this for a while. The slow chemical feedback through the glial cells is the biological equivalent of backpropagation - it's the errors feeding back. We've also known that there are 10x as many glial cells

    • Critics call these "foundations" will "mess up the discourse."

      This doesn't quite parse as an English sentence.

      Does this mean that we've got AI trying to post articles about itself?

      No. Basic grammar is one of the easiest things for an automatic system to do correctly. My grammar checker warns me about the above sentence.

      This was a very human mistake.

    • Critics call these "foundations" will "mess up the discourse."

      This doesn't quite parse as an English sentence.

      Oh no! They messed up the discourse through not messing up the discourse!

  • It seems like they are missing the obvious motivations behind this path in AI: to advance business applications of AI. Seriously, Big Data is a thing and they likely got some "donations" that "suggested" they go this route because it best fits their business model. It's all too common that choices are made in education that are geared not toward educating but rather toward enabling businesses. Just look at the shift toward teaching Java instead of C++. C++ offers many more educational concepts (especial

    • by ceoyoyo ( 59147 )

      I think that's exactly what Stanford is NOT missing. If you want to develop quick, practical applications with minimal effort, your best bet is to take some ginormous model that Google or Facebook has helpfully trained for you, and apply it to the problem that you carefully selected to be compatible with an existing model in the first place. It's a great way to generate papers, headlines, business interest, etc.

      If you want to drive discovery, you don't want to take some "fundamentals" that someone else has

  • by JBMcB ( 73720 ) on Monday September 20, 2021 @07:59AM (#61813259)

    Here is the underlying problem of the conversation:

    General Public's Understanding of AI: The Terminator

    Actual Implementation of AI: An image classification algorithm that can usually tell the difference between a hot dog and male genitalia.

    • Here is the underlying problem of the conversation:

      General Public's Understanding of AI: The Terminator

      Actual Implementation of AI: An image classification algorithm that can usually tell the difference between a hot dog and male genitalia.

      I think you meant: usually can't.

    • Re: (Score:1, Funny)

      Ah yes, I believe Alan Turing was rather good at that test.

      • Me dad was quite interested in the underlying problem of human consciousness, yes.

        ...

        Also in male genitalia.

        ...

        I suppose that was the joke.

        ...

        HALT

    • by AmiMoJo ( 196126 )

      Yes, they are saying that specialist AIs like the ones we have now for creating text or understanding speech are too limited to produce AI as laypeople think of it, and that creates problems when people assume AI is more complex than it is.

      An additional problem is the large financial rewards for working on these limited AIs, and much more limited funding available for AIs that may one day reach general intelligence.

      Interestingly this was the subject of the memo that Google's ex-AI team leader wrote, trying to war

      • The biggest problem is that developers co-opted a well-defined word as a piece of jargon due to hubris and now want everyone else to accommodate their definition.

        None of this issue would exist if they hadn't anthropomorphized their programs and overhyped what was happening with their word/speech processors.
    • Sometimes though, these debates about AI sound like, "How many A.I. can dance on the head of a pin?"
      • by HiThere ( 15173 )

        That was actually a serious question, however.
        At that time, matter was usually considered to be continuous, and the question was really "Are angels material entities?" It seems foolish to me, but many of the paintings of Adam and Eve obscure their midsections not to hide their genitals, but rather to hide whether they had a navel or not. It was a serious question whether they were or were not created with navels.

        Moving forwards to the present, "How many A.I. can dance on the head of a pin?" isn't consid

  • They meant to say "I, Robot".

    • by HiThere ( 15173 )

      Nahh. We still don't have positronic brains developed. Susan Calvin probably hasn't been born yet.

  • Wired got their title slightly incorrect. See the Forbin project.
  • ...it's automation in general that requires us to rethink foundations. Society has not yet adapted to the concept that mental processes which used to involve humans can now be performed unsupervised, without human intervention - and have been for decades. Last time something similar happened (for physical processes), we got the Industrial Revolution, which changed the shape (and size!) of the human race.

    People who complained about bureaucracy in the Eastern Bloc have yet to see what the automatic bureau-crazy has in store for

    • When the entire official body of law is been transformed into computer code

      Never happen. There's way too many contradictory laws. They'd have to change the laws first. Time, if nothing else.

      • When the entire official body of law is been transformed into computer code

        Never happen. There's way too many contradictory laws. They'd have to change the laws first. Time, if nothing else.

        Internal essential contradictions don't prevent software from being built. Only from running to completion uninterrupted.

        When those contradictions arise, the software throws an exception. Which is then automatically processed without supervision, handled by a different subsystem, which may or may not be devoid of contradictions or able to provide a satisfactory response.
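
        A purely hypothetical sketch of that point: two provisions encoded in software that give opposite answers for the same facts, with the contradiction surfacing as an exception for some other subsystem (or a human) to handle. The rules and field names are invented for illustration.

        ```python
        class LegalContradiction(Exception):
            pass

        def may_park(vehicle, street):
            permitted = street["parking_allowed"]
            banned = vehicle["type"] in street["banned_vehicle_types"]
            if permitted and banned and vehicle["has_permit"]:
                # Two rules give opposite answers for the same case.
                raise LegalContradiction("permit rule conflicts with vehicle ban")
            return permitted and not banned

        try:
            may_park({"type": "truck", "has_permit": True},
                     {"parking_allowed": True, "banned_vehicle_types": {"truck"}})
        except LegalContradiction as err:
            print("escalating to a human reviewer:", err)
        ```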

        However, my point was not that current laws will be ported to code. It is that whatever software platform is built, its code will become th

  • What's the problem with the term foundation?
    • by HiThere ( 15173 )

      The problem with the term "foundation" is that they're specifying things that may well turn out to be transitional artifacts as if they were basic.

      Additionally, even if they were permanent features of a particular implementation, they aren't the basic concepts. So "foundation" is the wrong term, even if the specified items become permanent methods of implementation.

      • While I'm thankful you explained it, I guess I'm just too ignorant or stupid to understand what it means. I guess this one is pretty deep in the AI specific lingo weeds?
        • by HiThere ( 15173 )

          It has nothing to do with AI lingo. It's the meaning of foundation, i.e. the thing that a pillar or building rests upon. The foundation is often not a part of the building, but the building depends upon the soundness of the foundation for its own soundness.

          This appears to be like saying that arithmetic depends on using an abacus.

  • by edi_guy ( 2225738 ) on Monday September 20, 2021 @10:41AM (#61813879)

    And put it in a safe place, like Star's End.

    Stretching the analogy, it could be that Zuck is the Mule.

  • Emily M. Bender, a professor...

    I sure like to think that 'M.' stands for mind. :)

  • Why can't we just always fucking put periods in the term, like A.I., making it unambiguous, regardless of font?

    With sufficient context it's often obvious, but in a headline, it often is not.

  • To declare the current direction of AI to be "foundational" is to ascribe too much importance to it. As other posters have said, we have made progress in classifying objects, but obviously intelligence is so much more than that. It may be that we're heading down a blind alley, and need to step back and take another approach, or many other approaches. This just sounds like researchers patting themselves on the back for being so clever, instead of questioning everything like a competent scientist should do.

  • It is not obvious to me that artificial intelligence should be modeled on biological intelligence. Evolutionary biology led to flight using flapping wings. Hardly relevant to airplane design.
