Education AI Google

Can Google Scholar Survive the AI Revolution?

An anonymous reader quotes a report from Nature: Google Scholar -- the largest and most comprehensive scholarly search engine -- turns 20 this week. Over its two decades, some researchers say, the tool has become one of the most important in science. But in recent years, competitors that use artificial intelligence (AI) to improve the search experience have emerged, as have others that allow users to download their data. The impact that Google Scholar -- which is owned by web giant Google in Mountain View, California -- has had on science is remarkable, says Jevin West, a computational social scientist at the University of Washington in Seattle who uses the database daily. But "if there was ever a moment when Google Scholar could be overthrown as the main search engine, it might be now, because of some of these new tools and some of the innovation that's happening in other places," West says.

Many of Google Scholar's advantages -- free access, breadth of information and sophisticated search options -- "are now being shared by other platforms," says Alberto Martín-Martín, a bibliometrics researcher at the University of Granada in Spain. AI-powered chatbots such as ChatGPT and other tools that use large language models have become go-to applications for some scientists when it comes to searching, reviewing and summarizing the literature. And some researchers have swapped Google Scholar for them. "Up until recently, Google Scholar was my default search," says Aaron Tay, an academic librarian at Singapore Management University. It's still top of his list, but "recently, I started using other AI tools." Still, given Google Scholar's size and how deeply entrenched it is in the scientific community, "it would take a lot to dethrone," adds West. Anurag Acharya, co-founder of Google Scholar, at Google, says he welcomes all efforts to make scholarly information easier to find, understand and build on. "The more we can all do, the better it is for the advancement of science."
Acharya says Google Scholar uses AI to rank articles, suggest further search queries and recommend related articles. What Google Scholar does not yet provide are AI-generated summaries of search query results. According to Acharya, the company has yet to find "an effective solution" for summarizing conclusions from multiple papers in a brief manner that preserves all the important context.
This discussion has been archived. No new comments can be posted.


  • Seriously, give it a rest. The public sees spicy autocomplete for what it is and doesn't want it. Quit trying to bruteforce shit we already solved more efficiently.
    • ChatGPT has 200 million active users, defined as users who use it at least once a week. https://www.reuters.com/technology/artificial-intelligence/openai-says-chatgpts-weekly-users-have-grown-200-million-2024-08-29/ [reuters.com] ChatGPT is one of the most popular of the new AI systems, but it is very much not the only one. So yes, in fact, a lot of people want this. As for this being "spicy autocomplete," while the essential idea of a large language model has some resemblance to what an autocomplete does, it is far more than that.
      • Super interesting how a weird nerd jumped out in front of the bullet in defense of another useless Elon Musk vanity company that wasn't even mentioned.
        • by JoshuaZ ( 1134087 ) on Tuesday November 19, 2024 @08:02PM (#64958845) Homepage
          Sigh. This has nothing to do with Musk, and the fact that you think it does shows your own ignorance. ChatGPT is made by OpenAI, which Musk was involved in very early on and is now feuding with, with multiple lawsuits https://www.latimes.com/business/story/2024-11-19/musk-escalates-altman-legal-feud-casting-openai-as-monopolist [latimes.com]. Musk is an ass, but even if he weren't an ass, it wouldn't be relevant here. Heck, even if he owned ChatGPT it wouldn't be relevant. ChatGPT is a free, easy-to-access LLM. I'm lazy and not going to go with another one. (And the only one I regularly use is Magic School AI, which isn't designed for this sort of thing.) But you could get identical behavior from any major LLM, including Claude, Gemini, and Llama. Now, do you want to respond to the actual substantive points here or just engage in weird ad hominem attacks which aren't even based on facts?
          • Re: (Score:2, Flamebait)

            Difficulty: All LLMs are equally bullshit and worthless.
            • Difficulty: All LLMs are equally bullshit and worthless.

              Well, that's an improvement over making weird ad hominem attacks that are based on simply untrue claims about who owns what companies. But this is only a marginal improvement. In particular, that's not replying to any of the points at hand. You claimed that the public doesn't want LLMs. I responded with actual data showing that a large number of people are using it. Note that even if it were true that "All LLMs are equally bullshit and worthless," it would not make your claim that the public doesn't want LLMs correct.

              • Still missing the core part of this conversation, which is 1) all LLMs are worthless trash wasting everyone's time and energy, and 2) only weird nerds want to debate strangers on the internet. Be better.
                • Still missing the core part of this conversation, which is 1) all LLMs are worthless trash wasting everyone's time and energy, and 2) only weird nerds want to debate strangers on the internet. Be better.

                  1) Calling something the "core part of this conversation" doesn't mean you've established the claim you've made at all. Note also that if LLMs are "wasting everyone's time" then that's also in direct contradiction to your own prior claim that the public doesn't want LLM AIs. 2) Labeling other people "weird nerds" isn't an argument but just an ad hominem. It is also a particularly silly one to try on Slashdot of all places, which classically had the slogan "News for nerds, stuff that matters." You also seem t

                  • See, you're still trying to debate where there is no debate. Fuck off.
                    • See, you're still trying to debate where there is no debate. Fuck off.

                      You've spent time on Slashdot before arguing with people and giving detailed reasoning behind your thought processes. But here you've decided to just engage in vitriol and declaring that "there is no debate." It should occur to you that your reaction this way in this context is because unlike on those other issues, you don't have any evidence. Since you don't, by all means feel free to respond with more vitriol and insults, and have a good day.

                    • So not only are you still not fucking off, and still not understanding that there's no debate to be had, but you're also still engaging in magical thinking.
      • Re: (Score:2, Insightful)

        by narcc ( 412956 )

        As for this being "spicy autocomplete," while the essential idea of a large language model has some resemblance to what an autocomplete does, it is far more than that.

        No, it isn't. You've been taken in by a parlor trick. Take a look at this video [youtube.com] of Elektro, a mechanical man from 1938 that responds to voice commands. The demo is real, it's not a puppet show, but it's carefully orchestrated to give you the impression that a lot more is happening than is actually happening. Once you know the trick, you lose the magic. The same is true for LLMs. They really are just "spicy autocomplete".

        • So, the experiment I suggested is precisely to avoid many of the claims being made by people trying to make that sort of argument. There's almost certainly no essay out there discussing the three pieces in question, so there's no easy way for it to copy or plagiarize from its training data. And you can repeat this experiment yourself with any three works of your choice. So what is going on here is much more subtle than anything like Elektro or a mere "parlor trick." LLMs are capable of sophisticated pattern
          • by narcc ( 412956 )

            Instead of playing with prompts, I recommend you actually learn something about the technology. It's not that mysterious.

            • So, this doesn't actually address the question which was specifically about what capabilities you predict or not.

              Instead of playing with prompts, I recommend you actually learn something about the technology. It's not that mysterious.

              No one here used the word "mysterious" but you. But since you've now introduced it, and implied that thinking it is "mysterious" is due to not knowing about the technology, let's discuss that. The underlying technology behind transformers is simple, with key aspects being a lack of recurrent units and a carefully chosen loss function: cross-entropy, i.e. the log of the model's perplexity. While transfo
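Since the loss function has come up: the training objective alluded to above can be sketched concretely. This is a toy illustration only, with invented numbers and NumPy standing in for a real training stack; the point is just that the standard loss is the mean negative log-likelihood of the observed next tokens (cross-entropy), and perplexity is its exponential.

```python
import numpy as np

def cross_entropy(probs, targets):
    """Mean negative log-likelihood of the true next tokens.

    probs:   (n_steps, vocab_size) predicted next-token distributions
    targets: (n_steps,) ids of the tokens that actually came next
    """
    picked = probs[np.arange(len(targets)), targets]
    return -np.mean(np.log(picked))

# Invented example: a 3-token vocabulary, three prediction steps.
probs = np.array([
    [0.7, 0.2, 0.1],   # model favors token 0; target is 0
    [0.1, 0.8, 0.1],   # model favors token 1; target is 1
    [0.3, 0.3, 0.4],   # model is unsure; target is 0
])
targets = np.array([0, 1, 0])

loss = cross_entropy(probs, targets)
perplexity = np.exp(loss)   # exp(cross-entropy); 1.0 would be a perfect model
print(loss, perplexity)
```

A model that always put probability 1 on the correct next token would score loss 0 and perplexity 1; hedging across the vocabulary pushes both numbers up, which is exactly what training pressures the network to reduce.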

              • by narcc ( 412956 )

                That's a lot of nonsense to say that we don't know everything and so you believe in magic.

                Again, take some time to learn about how the technology actually works. You very obviously don't, as your posts makes embarrassingly clear. As I've said, once you do, you'll understand the trick and the magic will disappear. Then you can stop suggesting meaningless "experiments" that do little more than seek confirmation for your delusions.

                • That's a lot of nonsense to say that we don't know everything and so you believe in magic. Again, take some time to learn about how the technology actually works. You very obviously don't, as your posts makes embarrassingly clear. As I've said, once you do, you'll understand the trick and the magic will disappear. Then you can stop suggesting meaningless "experiments" that do little more than seek confirmation for your delusions.

                  I was hoping you'd have some sort of substantive comment here, but instead it seems like you are favoring insulting people, claiming they must be ignorant, and engaging in general handwaving. That simple mathematical ideas can give rise to complicated and hard-to-predict systems doesn't require "magic" -- this is a very basic thing, as the examples I gave should have illustrated if you bothered to actually go and read them.

                  • by narcc ( 412956 )

                    I was hoping you'd have some sort of substantive comment here

                    The problem here is that you're not even remotely qualified to discuss the topic, as evidenced by the nonsense that you've written. You also have some very confused ideas about how science works, as evidenced by your silly "experiment".

                    Take this one statement for example:

                    There's almost certainly no essay out there discussing the three pieces in question, so there's no easy way for it to copy or plagiarize from its training data

                    It's clear even from just this alone that you have some deeply confused ideas about how LLMs work. As for the "experiment", it's complete nonsense both technically and epistemically. Where would I even begin? What can I do other than te

                    • What I don't understand is why you possibly think that repeating yourself this way is either going to a) convince me of anything b) convince anyone else reading this thread. You haven't gone through the minimal effort to identify any reason why anything I've written is wrong. You've just asserted it is wrong and that I must be deeply ignorant to think what I think. That's not productive for persuading either me, or anyone else. But since you insist so much on making this somehow about *me* rather than anyt
                    • by narcc ( 412956 )

                      So there's a chance that maybe I know what I'm talking about.

                      A 100% chance, as it happens.

                      In fact, I have a PhD in mathematics.

                      Great, so you should be able to handle the math at least. Now go actually learn something instead of posting ignorant nonsense.

                      that should maybe suggest to you that your conclusion that this just demonstrates one's ignorance for running such an experiment might, just possibly, maybe, be flawed?

                      Sigh... One expert saying something stupid about your non-experiment doesn't make it better. My guess is that they were trying to encourage you to keep learning, because you obviously have a very child-like understanding of the subject. I want you to think about why someone would call your nonsense "experiment" nonsense. What assumptions are you making?

                    • In fact, I have a PhD in mathematics.

                      Great, so you should be able to handle the math at least. Now go actually learn something instead of posting ignorant nonsense.

                      Again, repeatedly claiming other people are ignorant without taking half a step to even attempt to illustrate what they are wrong about doesn't really do anything useful. I'm not an expert on LLMs, but I am familiar with the basic architecture, and nothing you've said has been even a vague attempt at a handwave to point to some specific aspect of how LLMs work that I'm apparently ignorant of. Hopefully you can see the problem with that.

                      that should maybe suggest to you that your conclusion that this just demonstrates one's ignorance for running such an experiment might, just possibly, maybe, be flawed?

                      Sigh... One expert saying something stupid about your non-experiment doesn't make it better.

                    • by narcc ( 412956 )

                      Quotes too complicated for you, sparky?

                      it is unlikely that further conversation is going to be productive

                      Obviously. As I've explained to you already, you're not qualified for this "discussion".

                      Generally, when an expert in an area says something

                      Except when the expert is criticizing your nonsense, right? What a joke.

                      this will likely be my last response.

                      Do the world a favor and make that your last post ever. There's enough nonsense in this world without you piling things on.

        • by gweihir ( 88907 )

          As for this being "spicy autocomplete," while the essential idea of a large language model has some resemblance to what an autocomplete does, it is far more than that.

          No, it isn't. You've been taken in by a parlor trick. Take a look at this video [youtube.com] of Elektro, a mechanical man from 1938 that responds to voice commands. The demo is real, it's not a puppet show, but it's carefully orchestrated to give you the impression that a lot more is happening than is actually happening. Once you know the trick, you lose the magic. The same is true for LLMs. They really are just "spicy autocomplete".

          Indeed. They are also, to a limited degree, somewhat better search, as you can describe things you do not know the terminology for. Usually you get the right terms in the answer and a general fuzzy idea of what the thing is about, and then can use conventional search to find out more. They can also create "better crap", i.e. low-quality texts jumbled together from different sources that sound good, but are written without any insight or understanding. Oh, and I found that if you have some slight non-standard bo

          • by narcc ( 412956 )

            Have you ever heard of Gell-Mann Amnesia [wiktionary.org]? That's the first thing that comes to mind when I hear people talk about AI and search. When I try using an LLM to ask about something I know little about, the response often sounds impressive. When I ask about something I know a lot about, it often sounds like gibberish. That said, if all you're after are a few related terms that you couldn't call to mind, it'll work just fine, though I'd still question its utility over conventional search or a quality index.

            They can also create "better crap"

            Sure

            • by gweihir ( 88907 )

              Not so far. But it does seem to fit well. People generally (about 80%) trust well-sounding text or people that mince "golden words". The ones that do not (about 20%) are the independent thinkers and to a lesser degree those that are accessible to rational argument. Those numbers are apparently well-known in Sociology, although they tend to obscure in their writings that they are talking about most people being fucking dumb. Seems to make publishing a lot easier.

    • I had a talk two weeks ago with a colleague, who explained to me that they stopped using Google Scholar in favour of ChatGPT. I said I prefer to do an exhaustive keyword search, say, analysing the first 200 results; they replied that's exactly what they ask ChatGPT to do for them. Apparently "the public" wants it.

      • by narcc ( 412956 )

        He's in for quite a surprise... I'd feel bad for him, but he's clearly been warned.

  • Can it survive? (Score:3, Insightful)

    by Kiliani ( 816330 ) on Tuesday November 19, 2024 @09:46PM (#64958955)

    Yes, probably.

    As long as you can perform intelligent, sophisticated searches and Google resists the urge to enshittify it, I would say yes. I do not use Google search otherwise, but Google Scholar is useful. Reminds me of the good old days when you could actually craft an intelligent search and the search engine would return what you asked for.

    Will it live forever? Of course not. Will it live until I retire? Likely yes. Once AI adheres to my wishes, maybe it will die. But that seems a long way off.
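As an aside for readers who want to craft those searches: Google Scholar accepts a couple of operators on top of plain keywords. The two below are the ones documented in Scholar's own search tips; for anything more specific (date ranges, publication venues), the Advanced search form is the safer route.

```text
"digital object identifier"       match the exact phrase, not the separate words
author:"a einstein" relativity    restrict results to papers by a matching author
```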

    • by pjt33 ( 739471 )

      I use Google Scholar to look up papers by title and author: in other words, as a search engine. How would an LLM do that better?

      • by narcc ( 412956 )

        It will give you more results, summaries of key points, and a synthesis. That's certainly better, assuming you don't care that all of that is very likely to be complete nonsense.

        • by pjt33 ( 739471 )

          That's what the abstract is for, unless the authors of the paper are completely incompetent.

          • by narcc ( 412956 )

            You seem to have missed the important bit: "all of that is very likely to be complete nonsense" It will invent authors and papers, make up quotes, fabricate key points, and produce meaningless summaries.

            An LLM will give you results that look nice, but are unreliable. LLMs are worse than useless as a substitute for Google Scholar.

            • by pjt33 ( 739471 )

              Not at all. My point was that a service which provides summaries of questionable accuracy is completely pointless in a context in which the people who wrote the original material already always provide a summary of it.

              • by narcc ( 412956 )

                Where is the disagreement here? "worse than useless" vs "completely pointless"

                Of course, with the LLM, there is no guarantee that you're getting an inaccurate summary of a paper that actually exists.

                • by pjt33 ( 739471 )

                  The disagreement is mainly a matter of emphasis. You're emphasising the inaccuracy of the automatic summary, whereas I'm emphasising its superfluity given that every journal paper already has a summary.

                  • by narcc ( 412956 )

                    Can they really be superfluous when imaginary papers don't actually have abstracts?

  • I think somebody has not looked at what is actually going on. All too common these days. Clueless and disconnected from reality is the personal condition of most people.

  • The most likely cause of death of anything from Google is being cancelled by Google. Speculating anything else as external cause is a waste of time.

  • It's an amazing resource which they deserve a lot of credit for creating, and one which is unlikely to make them any money. While doing my MA in church history, it led me to lots of interesting material which I wouldn't have found otherwise except with vastly more effort.

  • I have no idea what an LLM would do in that situation. How would it fold in the results other than pointing out specific differences? If it tries munging them together, then it's no better than a hallucination machine.

    • We don't know for sure because, once built, LLMs operate like a black box. Probably the most common or popular opinion reflected in the training data. This will have the effect of excluding any 'minority reports' and drive a hive-mind, group-think style of science. With the elite being the ones who decide what data the LLMs will be trained on, and thus what answers they'll give. Max Planck set forth the idea that science progresses one funeral at a time. They'll have to rewrite it as science progresses in w
    • by narcc ( 412956 )

      Well, it is a hallucination machine. It operates on statistical relationships between tokens, not on facts and concepts. It has no mechanism by which it can carefully consider different possibilities and craft a reply. It just predicts next tokens, one at a time, using a constant-time deterministic process. That this works as well as it does is amazing, but it's not doing anything more than that. It can't.
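The "predicts next tokens, one at a time" loop described above can be sketched in a few lines of Python. Everything here is invented for illustration -- a hard-coded bigram table stands in for the trained network, and greedy decoding stands in for real sampling strategies -- but generation in an LLM has this same basic shape: map the context to a distribution over next tokens, pick one, append, repeat.

```python
bigram = {  # toy "model": P(next | current), invented for illustration
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.9, "ran": 0.1},
    "dog": {"sat": 0.5, "ran": 0.5},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(start, max_tokens=5):
    """Generate text one token at a time from a starting token."""
    tokens = [start]
    for _ in range(max_tokens):
        dist = bigram.get(tokens[-1])   # context -> next-token distribution
        if not dist:
            break                       # no known continuation: stop
        # greedy decoding: always take the single most probable next token
        tokens.append(max(dist, key=dist.get))
    return " ".join(tokens)

print(generate("the"))  # -> "the cat sat down"
```

Swapping `max()` for a weighted random choice gives sampling instead of greedy decoding; real systems layer on temperature, top-k/top-p filtering, and so on, but the loop itself is unchanged.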

  • Shitty scientists seek shortcuts by having crap AI summarize data that they don't want to spend the time researching. Fuck current-gen AI. It's barely functional word-jamming bullshit and I'm tired of every asshole (i.e. manager) in existence buying into the hype just because some salesperson posing as an AI Prophet has told them it's the future. If this shit is the future, the future is even bleaker than today, and that's fuckin' saying something.
