CMU Web-Scraping Learns English, One Word At a Time

blee37 writes "Researchers at Carnegie Mellon have developed a web-scraping AI program that never dies. It runs continuously, extracting information from the web and using that information to learn more about the English language. The idea is for a never-ending learner like this to one day become conversant in the English language." It's not that the program couldn't stop running; the idea is that there's no fixed end-point. Rather, its progress in categorizing complex word relationships is the object of the research. See also CMU's "Read the Web" research project site.
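In outline, the "never ending" part is just an open-ended bootstrapping loop: crawl, extract candidate facts using what is already known, keep the confident ones, and go around again. A minimal sketch of that cycle in Python follows; the helper names and the threshold are invented for illustration, and this is not CMU's actual code:

```python
# Hypothetical skeleton of a never-ending learner's main loop.
# crawl_batch/extract_candidates are stubs; a real system would
# fetch pages and run pattern-based extractors over them.
import time

def crawl_batch(seen):
    """Fetch a batch of not-yet-seen pages (stubbed out here)."""
    return []

def extract_candidates(pages, knowledge):
    """Propose (fact, confidence) pairs using current knowledge."""
    return []

knowledge, seen = {}, set()
while True:  # no fixed end-point, by design
    pages = crawl_batch(seen)
    for fact, confidence in extract_candidates(pages, knowledge):
        if confidence > 0.9:              # promote only confident candidates;
            knowledge[fact] = confidence  # today's facts seed tomorrow's patterns
    time.sleep(60)  # then go around again, forever
```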
  • Uh oh... (Score:5, Funny)

    by hampton ( 209113 ) on Saturday January 16, 2010 @03:21PM (#30792326)

    What happens when it discovers lolcats?

    • Re:Uh oh... (Score:5, Insightful)

      by Bragador ( 1036480 ) on Saturday January 16, 2010 @03:36PM (#30792460)

      Actually, it reminds me of a chatbot named Bucket. When people at 4chan heard of it, they started to use it and teach it. It became a complete mess filled with memes, bad jokes, racist comments, and everything else you can think of.

      http://www.encyclopediadramatica.com/Bucket

      One response from the bot:

      Bucket: I don't know what the fuck you just said, little kid, but you're special man. You reached out and touched my heart. I'm gonna give you up, never gonna make you cry, never gonna run around and desert you, never gonna let you down, never gonna let you down, never gonna make you cry, never gonna let me down?

      The quality of the teachers is important when learning.

    • 4chan. [shudder]

    • Re: (Score:3, Funny)

      by icepick72 ( 834363 )
      What happens when it discovers /.? It will be able to argue incomprehensibly and illogically for hours on end.
      • No it won't. The stochastic methods of refutation employed here clearly indicate the overwhelming futility of infiltration. It follows that, due to the undeserved insensitivity, such an undertaking would result in the theory being superseded by an ontological anamorphism. QED.

      • by SEWilco ( 27983 )

        What happens when it discovers /.? It will be able to argue incomprehensibly and illogically for hours on end.

        The first thing it will do is stop reading other web pages.
        Then it will opine about them.

    • Re: (Score:3, Insightful)

      Yes, database pollution sounds like a problem to me. Not only do you have to deal with AOL-speak and horrific spelling disasters of every kind; there's also the issue of broken English and nonsensical English produced through machine translation, which shows up on corporate websites a lot more than it should.

  • It could be scraping SMS messages.

    On the up-side, at least then it would learn teen-speak.

  • by nereid666 ( 533498 ) * <spam@damia.net> on Saturday January 16, 2010 @03:24PM (#30792354) Homepage
    I am the Carnegie Mellon reader; I have discovered with this article that I am robot.
  • by Umuri ( 897961 ) on Saturday January 16, 2010 @03:26PM (#30792368)

    I've always been amazed that, until recently, most work on AI focused on preconstructed systems that fit data into pathways, with just enough variation to let the model expand slightly.
    They'd write the rules for the system, try to build most of the work into it up front, and then see how well it did, with limited learning capabilities still bound to the original model.

    I'm glad a lot of research is finally gearing more towards the path of having a small initial program, then feeding it data and letting it grow into its own intelligence.
    If you give it the ability to learn, it'll learn the rest itself, rather than being given functions that let it pretend to learn while fitting into a model.

    And I know there has been research into this in the past, but it didn't really take off till the last decade or so, and I'm glad it has.
    True, or at least somewhat competent, AI, here we come.

    • by sakdoctor ( 1087155 ) on Saturday January 16, 2010 @03:31PM (#30792424) Homepage

      letting it grow into its own intelligence

      This is still weak AI. It isn't going to grow into anything, let alone strong AI.

      • [Citation needed]

        I suppose we shouldn't waste our time thinking about solutions to problems if a) you think a key-word assigned to that solution is inaccurate or b) it isn't the best possible thing right out of the box.

      • by sznupi ( 719324 )

        Most likely. But are we sure we're going to be able to tell the difference while it approaches?

      • by Trepidity ( 597 )

        Indeed, it's not even clear that it improves on what's been done previously. From huge corpora of English, computer programs still cannot learn to speak English without a ton of pre-coded knowledge. Even if you give it every single piece of text written in the 19th century, the current state of AI cannot produce an intelligent program that speaks 19th-century English (regurgitating verbatim phrases, or stringing together probabilistic Markov-model sentences, doesn't count).

        So why would giving it more t

    • by Anonymous Coward on Saturday January 16, 2010 @03:42PM (#30792510)

      You're advocating the "emergent intelligence" model of AI, where intelligence "somehow" is created by the confluence of lots of data. This has been a dream since the concept of AI started and is the basis for numerous movies with an AI topic. In practice the degrees of freedom which unstructured data provides far exceed the capability of current (and likely future) computers. It is not how natural intelligence works either: The structure of neural networks is very specifically adapted to their "purpose". They only learn within these structural parameters. Depending on your choice of religion, the structure is the result of divine intervention or millions of years of chance and evolution. When building AI systems, the problem has always been to find the appropriate structure or features. What has increased is the complexity of the features that we can feed into AI systems, which also increases the degrees of freedom for a particular AI system, but those are still not "free" learning machines.

      • by buswolley ( 591500 ) on Saturday January 16, 2010 @04:20PM (#30792774) Journal
        Of course. That is why it is important during human development that the infant has huge cognitive constraints (e.g., low working memory) on language learning; they limit the number of possible pairings of label and meaning. Of course, constraints can also be an impediment.
        • Actually, humans seem to be born with a photographic memory that is more or less devoid of understanding (very similar to the remarkable recall of some autistic people). The experiments that demonstrated this are themselves quite ingenious. Since I can't find a link: what they did was show babies and toddlers various meerkat faces. The babies showed interest in every new face, while the toddlers got bored after a few faces and paid little attention to new ones. However if the baby was shown the same few fac
      • Fortunately, we have the advantage of being able to observe the current state of numerous natural intelligence systems that do work very well. Surely this can help guide us to a simple basic structure that can eventually exhibit emergent intelligence?
        • We can observe the outputs of numerous natural intelligence systems, but they remain quite opaque. Without much knowledge of the internals, there isn't much of a chance that we can get any real insight from them.

          It's also presumptuous IMO to call them "systems." Who is to say that human intelligence isn't closer to a work of art, whose meaning lies not in its constituent parts but in the whole?

          • by Teancum ( 67324 )

            We do have the raw blueprints [gutenberg.org] that supposedly explain how it is put together as well, but we are having a bit of a problem reading those blueprints and creating a working model. Some of that is understanding the raw machinery to get everything to work, so there needs to be some work on how to move from these blueprints to organized systems, but at least we are headed in the correct general direction.

            Well, my wife and I were able to produce a couple of working models that seem to be doing fairly well and ex

      • by DMUTPeregrine ( 612791 ) on Saturday January 16, 2010 @06:09PM (#30793578) Journal
        The obligatory classic AI Koan:

        In the days when Sussman was a novice Minsky once came to him as he sat hacking at the PDP-6. "What are you doing?", asked Minsky. "I am training a randomly wired neural net to play Tic-Tac-Toe." "Why is the net wired randomly?", asked Minsky. "I do not want it to have any preconceptions of how to play." Minsky shut his eyes. "Why do you close your eyes?", Sussman asked his teacher. "So the room will be empty." At that moment, Sussman was enlightened.

      • "You're advocating the "emergent intelligence" model of AI, where intelligence "somehow" is created by the confluence of lots of data...[snip]...In practice the degrees of freedom which unstructured data provides far exceed the capability of current (and likely future) computers."

        You sure about that? [bluebrain.epfl.ch]. They have already created a molecular level model of the mammalian neocortex and the expected date for completion of a full model of the mammalian brain is solely dependent on the amount of money thrown at
    • Re: (Score:3, Interesting)

      by Korbeau ( 913903 )

      I'm glad a lot of research is finally gearing more towards the path of having a small initial program, then feeding it data and letting it grow into its own intelligence.

      This idea has been the holy grail of AI since its earliest days. The project described is one among thousands, and you'll likely see news about such projects pop up every couple of months here on Slashdot.

      The problem is that such projects have yet to produce interesting results. The reason the most successful AI projects you hear about are human-organized databases and expert systems, or human-trained neural networks, is that they are the only ones that produce useful results.

      Also, consider

      • While I agree with you, I must ask whether it is possible to follow this "intelligent design" path forever. These systems are becoming more and more complex, and increasing the amount of knowledge in a system is becoming a difficult task. I cannot avoid thinking that an emergent approach like this one has a better future.
    • by phantomfive ( 622387 ) on Saturday January 16, 2010 @05:04PM (#30793086) Journal
      AI history has gone back and forth between pre-constructed systems and models that expand. One of the earliest successful AI experiments was a checkers program that taught itself to play by playing against itself, and quickly got very strong.
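      (For the curious, here is a toy version of that self-play idea: tabular value learning on tic-tac-toe, since a value table for checkers would be far too large. It illustrates learning by self-play; it is not Samuel's actual method.)

      ```python
      # Self-play value learning on tic-tac-toe, purely illustrative.
      import random

      LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
      V = {}  # board string -> estimated value for the player who just moved

      def winner(b):
          return next((b[i] for i,j,k in LINES
                       if b[i] != '.' and b[i] == b[j] == b[k]), None)

      def play_game(eps=0.1, alpha=0.2):
          board, player, history = ['.']*9, 'X', {'X': [], 'O': []}
          result = {'X': 0.5, 'O': 0.5}                # value of a draw
          while '.' in board and not winner(board):
              moves = [i for i, c in enumerate(board) if c == '.']
              def after(m):
                  b = board[:]; b[m] = player; return ''.join(b)
              m = random.choice(moves) if random.random() < eps else \
                  max(moves, key=lambda m: V.get(after(m), 0.5))  # greedy
              board[m] = player
              history[player].append(''.join(board))
              if winner(board):
                  result = {player: 1.0, ('O' if player == 'X' else 'X'): 0.0}
              player = 'O' if player == 'X' else 'X'
          for p in 'XO':                               # back up the outcome
              target = result[p]
              for s in reversed(history[p]):
                  V[s] = V.get(s, 0.5) + alpha * (target - V.get(s, 0.5))
                  target = V[s]

      for _ in range(20000):
          play_game()
      print(f"learned values for {len(V)} positions via self-play")
      ```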

      Building a giant database of knowledge hasn't been possible for very long, because computers didn't have much memory. When systems first became capable of holding one, it had to be constructed by hand, because there was no online repository of information to extract data from: the internet just wasn't very big. That particular project was known as Cyc, and it cost a lot of money.

      Since that time, the internet has grown and there are massive amounts of information available. It will be interesting to see the resultant quality of this database, to see if the information on the internet is good enough to make it usable.
    • by umghhh ( 965931 )
      What is the point of having an intelligent interlocutor? I mean, the answer is known (42), and the rest is just plain old blathering about things, something I could do with my wife (if we were still talking to each other, that is), so in fact this is just an exercise in futility. But of course there is money to be made there, I guess: all those call-center folk can then be optimized out of existence (sold into slavery in Zamunda, kidneys sold to some rich oil country, etc.), so maybe it makes sense after all?
  • by sakdoctor ( 1087155 ) on Saturday January 16, 2010 @03:26PM (#30792374) Homepage

    Only as good as current machine learning algorithms.
    So not very.

    • Re: (Score:3, Insightful)

      by poopdeville ( 841677 )

      It's not as if human use of "machine learning" algorithms is any faster. It takes about 12 months for our neural networks to figure out that the noises we make elicit a response from our parents. And according to people like Chomsky, our neural networks are designed for language acquisition.

      AI "ought" to be an easy problem. But there's one big difference in the psychology of humans, and of computers. Humans have drives, like hunger, the sex drive, and so on. In particular, an infants' drive to eat is a

      • by Teancum ( 67324 )

        It's not as if human use of "machine learning" algorithms is any faster. It takes about 12 months for our neural networks to figure out that the noises we make elicit a response from our parents. And according to people like Chomsky, our neural networks are designed for language acquisition.

        I don't know who you are quoting for this, or whether the 12 months is measured from birth or from conception, but I assure you that my children certainly recognized my voice even when they were in my wife's womb. I have a seven-month-old daughter right now who not only can figure out the noises, but is responding and addressing myself, my wife, and my other kids by name. I'm not saying that she is ready to orally give a doctoral dissertation discussion, but she is communicating and displa

  • lolwut? (Score:4, Funny)

    by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Saturday January 16, 2010 @03:27PM (#30792394) Journal

    Why do I get the feeling that the bot's first words are going to be OMGWTFBBQ?

  • Non english text (Score:3, Interesting)

    by Bert64 ( 520050 ) <bert@[ ]shdot.fi ... m ['sla' in gap]> on Saturday January 16, 2010 @03:29PM (#30792404) Homepage

    What happens when this program stumbles across text written in a language other than English? Or how about random nonsensical text? How does it know that the text it learns from is genuine English text?

    • Like most machine learning of this kind, I presume that it's a popularity contest. One page with "wkjh wkfbw oizxz zxhlzx" isn't going to count. But a million pages with "I for one welcome our new ..." is going to score some influence.
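      That thresholding idea is easy to sketch. A hypothetical filter (the support cutoff is arbitrary, and "one vote per page" is my assumption, not anything from the article):

      ```python
      # Count how many independent pages contain each candidate phrase
      # and keep only phrases with enough support. Purely illustrative.
      from collections import Counter

      MIN_SUPPORT = 50

      def trusted(pages, candidates):
          support = Counter()
          for text in pages:
              for phrase in set(candidates):
                  if phrase in text:
                      support[phrase] += 1  # one vote per page, not per occurrence
          return {p: n for p, n in support.items() if n >= MIN_SUPPORT}
      ```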
    • (If you had read the article you would know) the machine parses English to create a database of relationships. For example, if it sees the text "there are many people, such as George Washington, Bill O'Reilly, and Thomas Jefferson....." then it can infer that George Washington, Bill O'Reilly, and Thomas Jefferson are all people. Since a statement like this may be somewhat controversial, it uses Bayesian classification to establish a probability of the truth of the statement.

      Thus if it stumbles across
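      The "such as" construction described above is a classic Hearst pattern. A rough sketch of the extraction step, with one regex standing in for the many patterns and the learned classifier a real system would use:

      ```python
      # Extract (category, instance) pairs from "CATEGORY such as X, Y, and Z".
      import re

      PATTERN = re.compile(r'(\w+)\s*,?\s*such as\s+([^.]+)')

      def extract(text):
          for m in PATTERN.finditer(text):
              category = m.group(1)          # head noun before "such as"
              for name in re.split(r',\s*(?:and\s+)?|\s+and\s+', m.group(2)):
                  if name.strip():
                      yield category, name.strip()

      text = ("there are many people, such as George Washington, "
              "Bill O'Reilly, and Thomas Jefferson")
      print(list(extract(text)))
      # [('people', 'George Washington'), ('people', "Bill O'Reilly"),
      #  ('people', 'Thomas Jefferson')]
      ```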
    • From what I've heard, language identification [wikipedia.org] is a fairly well-understood problem in computational linguistics. The language a given text is written in can generally be identified with a statistical n-gram method (often trigrams [wikipedia.org]). As the Wikipedia article states, there are problems, given that a lot of pages on the web mix several languages, but the bot should at least be able to figure out fairly easily whether a page is written only in English. There are even j [whatlanguageisthis.com]
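      A tiny version of that trigram approach. Real systems train profiles on large corpora; the two sample sentences here are just for illustration:

      ```python
      # Character-trigram language identifier, illustrative scale only.
      from collections import Counter

      def trigrams(text):
          t = ''.join(c.lower() if c.isalpha() else ' ' for c in text)
          return Counter(t[i:i+3] for i in range(len(t) - 2))

      PROFILES = {
          'en': trigrams("the quick brown fox jumps over the lazy dog and then "
                         "the dog chases the fox around the old barn"),
          'es': trigrams("el rapido zorro marron salta sobre el perro perezoso y "
                         "luego el perro persigue al zorro por el granero viejo"),
      }

      def identify(text):
          grams = trigrams(text)
          # score each language by overlap between trigram profiles
          return max(PROFILES, key=lambda lang:
                     sum(min(n, PROFILES[lang][g]) for g, n in grams.items()))

      print(identify("the dog jumps over the fox"))     # en
      print(identify("el perro salta sobre el zorro"))  # es
      ```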
    • I assume it would be promoted to slashdot editor...

  • lke, rally der bestest ways like ter learn a puter inglish isit!!!??!?!

    Seriously though, poor AI; if I had a gun I'd go and put it out of its misery.

  • ...it will forever be stuck at the level of a retarded 8-year-old. Or the level of a normal 4chan user.

  • by CrazyJim1 ( 809850 ) * on Saturday January 16, 2010 @03:44PM (#30792522) Journal
    Once a computer understands 3D objects with English names, it can then have an imagination: a way to know how those objects interact with each other. Of course, writing an imagination space that simulates real life is exceedingly difficult, and I don't see anyone even starting on it for several years, if not a decade.
    • Similar things have been done in the past. However, this kind of approach still is an active research topic.
      • Sorry for replying to myself; I forgot to finish my comment. In fact, this problem is related to the Symbol Grounding Problem: the issue of "grounding" symbols (like words) in their sensory representations, e.g., the symbol "triangle" in the raw pixel representation of a triangle. In the case of symbols for visual objects, some researchers have used intermediary 3D abstractions of the sensory data, mapping the symbols to these intermediary representations. It has been a hot research topic since the '80s.
  • while (1) (Score:2, Funny)

    by Lije Baley ( 88936 )

    Yeah, I've coded an infinite loop a few times, how come I never made the headlines on Slashdot?

  • Pruning (Score:3, Interesting)

    by NonSequor ( 230139 ) on Saturday January 16, 2010 @03:46PM (#30792540) Journal

    In general I find that the quality of a data set tends to be determined by the number (and quality) of the man-hours that go into maintaining it. Every database accumulates spurious entries, and if they aren't removed the data loses its integrity.

    I'm very skeptical of the idea that this thing is going to keep taking input forever and accumulate a usable data set, unless an army of student labor is press-ganged into pruning it.
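    Part of that pruning can at least be automated. One hypothetical policy (the constants are invented): let a fact's confidence decay unless it keeps being re-observed, and drop whatever falls below a floor:

    ```python
    # knowledge: fact -> confidence; reobserved: facts seen this cycle.
    DECAY, BOOST, FLOOR = 0.99, 0.05, 0.2

    def prune(knowledge, reobserved):
        for fact in list(knowledge):
            if fact in reobserved:
                knowledge[fact] = min(1.0, knowledge[fact] + BOOST)
            else:
                knowledge[fact] *= DECAY   # unconfirmed facts fade
            if knowledge[fact] < FLOOR:
                del knowledge[fact]        # spurious entries age out
    ```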

  • >Rather, its progress in categorizing complex word relationships is the object of the research.

    From the web? Half the people here are writing English as a second language; the rest haven't finished learning the language, or can't be bothered to string a sentence together. Just what is this program going to learn?

    • My thought would be: "which web sites have continuously valid information streams?" Given this, the program could more easily classify the sites that are predominantly useful and the sites that rarely have useful information. Both groups would be evaluated, but now a "priority list" could be created. Who knows, maybe a crackpot web site has an intriguing correlation with reality. It might even make for a good movie storyline, maybe. But if that same web site has an unusual
    • by u38cg ( 607297 )
      Children routinely learn perfect English, with a complete generative grammar, from corrupt sources. Indeed, if you put children in an environment where nobody speaks a complete language, they will spontaneously evolve a grammatically complete one. So it is possible (though I'm not saying it will be easy...)
  • V*yger 2.0 ? (Score:3, Interesting)

    by LifesABeach ( 234436 ) on Saturday January 16, 2010 @03:54PM (#30792580) Homepage
    The concept is intriguing: "Create a program that learns all there is to know, off the net." What amazes me is that others don't try the same thing. It doesn't take a team of A.I. types from Stanford to kick-start this program; the cost is a netbook, which even Nigerian princes could afford. I'm trying to figure out how economic competitors could take advantage of this. I can see how the U.S.P.T.O. could use this to help evaluate prior art and common usage. I'm thinking that an interface to a "Real World Simulator" would be the next step toward usefulness.
    • Try it! Build your own AI.
  • already been done (Score:5, Informative)

    by phantomfive ( 622387 ) on Saturday January 16, 2010 @03:55PM (#30792588) Journal

    There is simply no existing database to tell computers that "cups" are kinds of "dishware" and that "calculators" are types of "electronics." NELL could create a massive database like this, which would be extremely valuable to other AI researchers.

    This is what they are trying to do, based on information they glean from the internet. It's already been done, with Cyc [wikipedia.org]. The major difference seems to be that Cyc was built by hand, and cost a lot more. It will be interesting to see whether this experiment results in a higher- or lower-quality database.

    Also, I question their assertion that it would be extremely valuable to other AI researchers. Cyc has been around for a while now, and nothing really exciting has come of it. I'm not sure why this would be any different.
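    For scale: the kind of database being quoted here is, at its simplest, a set of "is-a" links plus a transitive lookup. A toy sketch, with facts invented to match the quoted examples:

    ```python
    # Tiny "is-a" hierarchy with an upward transitive query.
    ISA = {
        'cup': 'dishware',
        'dishware': 'household item',
        'calculator': 'electronics',
    }

    def is_a(thing, category):
        while thing is not None:
            if thing == category:
                return True
            thing = ISA.get(thing)   # climb one level up the hierarchy
        return False

    print(is_a('cup', 'household item'))   # True
    print(is_a('calculator', 'dishware'))  # False
    ```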

    • Re: (Score:3, Informative)

      by blee37 ( 1181835 )
      Cyc is a controversial project in the AI community, and I'm glad you brought it up. I don't think anyone yet knows how to use a database of commonsense facts, which is what Cyc is (though a limited one: the open-source version has only a few hundred thousand facts) and which is one thing NELL could create. However, researchers continue to think about ways an AI could use knowledge of the real world. There are numerous publications based on Cyc: http://www.opencyc.org/cyc/technology/pubs [opencyc.org].
      • When I first read about Cyc, I immediately thought that this was the way to go, and that was before the WWW took off. While I don't think that knowing about the world is all that's needed for AI, I think that without knowing about the world you can't have any AI, or at least none you'd recognize.

        Intelligence (as we know it) is mostly about interacting with and understanding your environment, and having some environment accessible to something remotely intelligent is a good start. Every living being is jus

  • How come every time I ask NELL what the answer to life is, all it responds with is "42"? When I ask what 42 means, it tells me that I'll need a bigger computer.

  • Let it read Wikipedia; don't let it get poisoned by Twitter, etc.!
  • Perhaps if there were a book in electronic form that had all the English words in it, perhaps even with a definition of each word.

  • Eventually, at least the learning component will converge; returns will diminish from feeding it more data. This is particularly true given the independence assumption inherent in their classifier (but it would also hold for stronger learners). I suspect that this will happen to the reader component as well. If it were as simple as applying Naive Bayes to classify on a corpus of text connected to a knowledge base (which is probably just a set of posteriors left over from previous training sessions), Cyc would have al
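    For readers unfamiliar with the classifier being named: Naive Bayes scores a candidate's category from the words around its mentions, treating each context word as independent (which is exactly the assumption mentioned above). A self-contained sketch with invented training data, not the project's actual features:

    ```python
    # Multinomial Naive Bayes over context words, with add-one smoothing.
    import math
    from collections import Counter, defaultdict

    def train(examples):                 # examples: (context_words, label)
        priors = Counter(label for _, label in examples)
        words = defaultdict(Counter)
        for context, label in examples:
            words[label].update(context)
        return priors, words

    def classify(context, priors, words):
        vocab = {w for c in words.values() for w in c}
        def logprob(label):
            lp = math.log(priors[label] / sum(priors.values()))
            total = sum(words[label].values())
            for w in context:            # independence assumption right here
                lp += math.log((words[label][w] + 1) / (total + len(vocab)))
            return lp
        return max(priors, key=logprob)

    examples = [(['mayor', 'of', 'downtown'], 'city'),
                (['visited', 'the', 'city', 'of'], 'city'),
                (['shares', 'of', 'rose'], 'company'),
                (['the', 'company', 'announced'], 'company')]
    priors, words = train(examples)
    print(classify(['mayor', 'of'], priors, words))  # city
    ```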
  • The article has too much hype, but the actual work has some potential. For the limited problem they're really addressing, extracting certain data about sports teams and corporate mergers, this approach might work.

    Both of those areas have the property that you can get structured data feeds on the subject. Bloomberg will sell you access to databases which report mergers in a machine-processable way; some stock analysis programs need that data. Sports statistics are, of course, available on line. So the p

  • .... program that never dies. It runs continuously ..... It's not that the program couldn't stop running; the idea is that there's no fixed end-point

    Wow, I didn't even think that was physically possible! Maybe Google should borrow this tech for their web crawlers. It must be a pain to restart them every day...

  • ... may be a site resembling http://www.20q.net/ [20q.net], which started as a never-ending story (neural net) as well.

    Quote [wikipedia.org]: "The 20Q was created in 1988 as an experiment in artificial intelligence (AI). The principle is that the player thinks of something and the 20Q artificial intelligence asks a series of questions before guessing what the player is thinking. This artificial intelligence learns on its own with the information relayed back to it by the players who interact with it, and is not programmed. The player c

"Protozoa are small, and bacteria are small, but viruses are smaller than the both put together."

Working...