Ask Slashdot: Who's Building The Open Source Version of Siri? (upon2020.com) 186

We're moving to a world of voice interactions processed by AI. Now long-time Slashdot reader jernst asks, "Will we ever be able to do that without going through somebody's proprietary silo like Amazon's or Apple's?" A decade ago, we in the free and open-source community could build our own versions of pretty much any proprietary software system out there, and we did... But is this still true...? Where are the free and/or open-source versions of Siri, Alexa and so forth?

The trouble, of course, is not so much the code, but in the training. The best speech recognition code isn't going to be competitive unless it has been trained with about as many millions of hours of example speech as the closed engines from Apple, Google and so forth have been. How can we do that? The same problem exists with AI. There's plenty of open-source AI code, but how good is it unless it gets training and retraining with gigantic data sets?

And even with that data, Siri gets trained with a massive farm of GPUs running 24/7 -- but how can the open source community replicate that? "Who has a plan, and where can I sign up to it?" asks jernst. So leave your best answers in the comments. Who's building the open source version of Siri?
This discussion has been archived. No new comments can be posted.


  • by Anonymous Coward

    There is no code to copy and build upon. Without copies, there is no copyright law to enforce openness. Consider a world without the GPL. Now consider it without piracy as well. Welcome to SaaS.

    • There is AGPL. It was made with SaaS in mind.

      • There is AGPL. It was made with SaaS in mind.

        But when you have dependencies on non-free services, a reliance on machine learning that requires big data, or any kind of infrastructure that you can't easily replicate, you aren't going to be doing the computing on your own computer. The AGPL is good, but in the context of SaaS it often isn't practical.

  • Sirius (Score:5, Informative)

    by Anonymous Coward on Sunday September 25, 2016 @12:54PM (#52958131)

    Sirius (Ubuntu only I believe):
    http://sirius.clarity-lab.org/sirius/

  • by Anonymous Coward on Sunday September 25, 2016 @12:55PM (#52958137)

    When you talk about the 'massive farm of GPUs' running 24/7, you ignore the fact that, because the system is proprietary, they are missing out on the potential compute resources out there.
    How many people have run SETI@home or gene-folding efforts? We just need someone insightful and ingenious to find a way to handle machine learning in an 'offline' way, and still present the user interface quickly.

    It would have to start out very dumb, but with some great key algorithms I expect an open source option could move a lot faster than anything out there in this regard.
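
    The volunteer-compute idea above maps naturally onto what is now called federated averaging: each volunteer trains on local data and ships only weight updates, which a coordinator averages. A minimal, hypothetical sketch (all names invented for illustration):

```python
# Hypothetical sketch: volunteers train on local audio, and a coordinator
# averages their weight updates, weighted by how much data each one saw
# (the core idea behind federated averaging). Not a real project's API.

def average_updates(updates):
    """Average per-volunteer weight vectors, weighted by sample count."""
    total = sum(n for _, n in updates)
    size = len(updates[0][0])
    avg = [0.0] * size
    for weights, n in updates:
        for i, w in enumerate(weights):
            avg[i] += w * (n / total)
    return avg

# Three volunteers report (weights, number_of_local_samples):
updates = [([1.0, 2.0], 100), ([3.0, 4.0], 100), ([5.0, 6.0], 200)]
print(average_updates(updates))  # [3.5, 4.5]: pulled toward the larger contributor
```

    The real engineering problems (stragglers, malicious updates, and the sheer size of speech models) are exactly where a SETI@home-style project would need those "insightful and ingenious" ideas.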

    • One of the fascinating things being tried right now is building decentralized applications on top of the cryptocurrency platforms. Ethereum, MaidSafe/Safecoin, Lisk, and others. The data is distributed in a distributed hash table (DHT). Users contribute CPU cycles, RAM usage, and disk space to the network in return for tiny portions of the respective digital currency. They're trying to build distributed autonomous organizations (DAOs), distributed alternatives to Twitter and Facebook, distributed altern
    • by Etcetera ( 14711 )

      We just need someone insightful and ingenious to find a way to deal with machine learning in an 'offline' way, and be able to present the user interface in a quick fashion.

      It would have to start out very dumb, but with some great key algorithms I expect an open source option could move a lot faster than anything out there in this regard.

      Precisely. I don't get what the misunderstanding is here among the Slashdot crowd.

      Natural Language Processing is neat tech. The mechanics of speech recognition are neat tech. Integration of the two via a dispatch engine and scriptlets to go off and search Google, run a command, or whatever else one can script, is neat tech.

      I'd use this ALL THE TIME if the data didn't leave my network, and I'm sure I'm not alone.

      We can't duplicate a zillion far off machines running a Google-scale cluster, but it's hard to see why
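
      The dispatch engine plus scriptlets described above can be sketched in a few lines; everything here (keywords, handlers) is invented for illustration, not any real assistant's API:

```python
# Hypothetical sketch of the dispatch layer: once a speech recognizer has
# produced text, a keyword-based intent table routes the query to a local
# handler scriptlet, so no data ever leaves the network.

HANDLERS = {
    "search": lambda q: f"searching locally for: {q}",
    "run":    lambda q: f"running command: {q}",
    "time":   lambda q: "it is 12:00",  # placeholder clock scriptlet
}

def dispatch(utterance):
    """Route an utterance to the first handler whose keyword starts it."""
    for keyword, handler in HANDLERS.items():
        if utterance.lower().startswith(keyword):
            rest = utterance[len(keyword):].strip()
            return handler(rest)
    return "sorry, I did not understand that"

print(dispatch("search cat pictures"))  # searching locally for: cat pictures
print(dispatch("make me a sandwich"))   # sorry, I did not understand that
```

      The hard 10% is the recognizer in front of this table; the dispatch part really is this simple.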

      • It's harder than you think. Those older systems sucked, and couldn't handle natural language queries. The issue is not processing power, it's having a large enough volume of training material and mimicking how the brain fills in gaps.

        Training material isn't just a case of gathering samples. When the machine makes a mistake, it needs to understand why. The collection needs careful curation and sorting to be useful. Such databases are extremely valuable, and historically open-source projects like this often started with a donation from a commercial body rather than building from scratch.

        Mimicking the brain is also extremely hard. Often people don't hear things very clearly or in full, due to environmental noise, poor pronunciation and the like. To compensate the brain fills in the gaps or makes assumptions. People have been trying to program those assumptions into computers since the 1980s. Again, a database of that knowledge will be vast and valuable. Either you throw massive human resources at building it, or you crawl the web and look at trillions of search queries like Google does.

        That's also why they need a cloud service to do this. The database is vast and proprietary, and querying it is far from a trivial SQL command.

        It's not just a programming or AI training problem, which is why no one is doing it. The closest thing the open-source world has is probably OpenStreetMap, but creating that data set was far less laborious and tedious than training a computer to have some common sense will be.

        • It also ignores the fact that people mishear and misunderstand each other. All. The. Time. Those gaps we fill in? Often erroneous. People actually expect more from computers than from other people: nearly perfect listening AND comprehension.

      Apple's dictation software (PlainTalk) was running on System 7.1 Pro 20 years ago, using local hardware hundreds of times slower than what I have in my pocket. Basic NLP code was running on the Newton, which was 1000x slower and still managed to handle the basics on top of the handwriting recognition. "Speakable Items" let me run user-writable AppleScripts to automate tasks and was just missing dictatable variable names.

        I helped Apple wreck a nice beach.

  • Honestly, the only way that I see this happening is if Google decides to make their AI interface open source. Which they might do as a public service -- but we're still playing in Google's sandbox.

    Unless there's some way to get geeks to contribute their unused CPU cycles, like what SETI was doing...
    • Search for aliens -- OoooOOOooooh!
      Sex robot -- Giggity!
      Create a digital assistant -- Meh.

      Siri and Google Now aren't sexy. Maybe what's needed is a chatty digital alien sexbot that happens to double as an assistant. Slide in the useful features on the sly, like hiding dog medicine in a piece of cheese.

    • Honestly, the only way that I see this happening is if Google decides to make their AI interface open source. Which they might do as a public service -- but we're still playing in Google's sandbox.

      You mean like this [tensorflow.org]?

  • Possible (Score:4, Interesting)

    by Bruce Perens ( 3872 ) <bruce@perens.com> on Sunday September 25, 2016 @12:57PM (#52958145) Homepage Journal

    First, I'm sure there's lots of Open Source being used in Google's implementation - just not where we can see.

    There is a speech recognizer from CMU that might be a good starting point. I haven't heard about plain-language software, though. There is additional rocket science to be done. Not insurmountable given things we've already done.

    Training with millions of people? Actually, that's the part that community development is good at.
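
    Community-scale training data collection is plausible if contributions are cross-validated by other volunteers. A toy sketch of that validation step, with invented thresholds:

```python
# Toy sketch: community members vote on whether a recorded clip matches
# its prompt; a clip enters the training set once it has enough reviews
# and a clear majority of "yes" votes. Thresholds invented for illustration.
from collections import Counter

def accept(votes, min_votes=3, ratio=0.66):
    """votes is a list of booleans from independent reviewers."""
    if len(votes) < min_votes:
        return False  # not enough reviews yet
    c = Counter(votes)
    return c[True] / len(votes) >= ratio

print(accept([True, True, False]))  # 2/3 agreement: accepted
print(accept([True, False]))        # too few votes: rejected for now
```

    This is the part a community genuinely can crowdsource; the GPU farm is a separate problem.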

  • Mycroft, obviously. (Score:4, Informative)

    by JudeanPeople'sFront ( 729601 ) on Sunday September 25, 2016 @12:58PM (#52958147)
    OK, you might not listen to the Linux Action Show or similar podcasts, but come on... google "open source AI" before asking.
  • Do you not realize that Siri must utilize a significant backend resource at the other end of a data connection to be effective, and that the backend requires substantial resources to operate and maintain? Siri is not some standalone app you download and forget. An open source equivalent would not be free, and would require a Kickstarter and guaranteed subscriptions to be feasible. I don't think the world is quite ready for such a thing yet, not in a country populated by people willing to vote for either Tr

  • Jasper Project (Score:2, Informative)

    by Anonymous Coward

    It is in development:

    http://jasperproject.github.io/documentation/

    Not affiliated with the project; I saw it some time ago and decided to wait until it matures further.

  • AFAIK Google's isn't open source, but I don't think anyone is paying for it directly. I recently bought a $28 US phone that came with the Android OS. Voice search came installed on the device (it's built into Maps), and it works very, very well. So I'm not sure the question is even worth answering.

    Sure, if there's an open source option, then the world can rest assured to be able to tinker with it themselves and that. And yes, Google could pull the plug on it. But for some reason, I feel that Google would just relea
    • by DavidRavenMoon ( 515513 ) on Sunday September 25, 2016 @01:41PM (#52958389) Homepage
      You pay for Android with allowing Google to data mine your info. This is why they wanted to be on mobile phones in the first place. This is why they offer "free" services like Gmail and photos. Their software reads all your emails. Then they target ads to you. Google Now is another way they can get advertising info from you. That's how Google makes their income, and why they can give Android away for "free" to phone makers. It's not about being open source. It's about advertising revenue.
      • I have yet to receive an ad from any app, or email in the last 5 years. And if anyone wants to use my data for profit, good luck, I'm poor on purpose.
  • by Theovon ( 109752 ) on Sunday September 25, 2016 @01:09PM (#52958229)

    There are a few application areas that are specialized and difficult enough that they may not be doable within the Free Software paradigm. Richard Stallman himself, for instance, was not able to explain to me how you could get the right specialized engineers together to develop a free equivalent to Synopsys Design Compiler. Enthusiasts in this area don't tend to be interested in writing software as a hobby, so you'd have to hire engineers, which means you have to pay for all the development.

    With automatic speech recognition, it's not just an AI problem. You need massive labeled datasets that cost money to acquire, and the experts who really know this stuff are moving on to their next research projects. So how are you going to get engineers to learn and implement the esoteric techniques used here? You'd have to pay them. Most people who would be interested in writing free software to do this just don't know the subject area well enough.

    • by AmiMoJo ( 196126 )

      I imagine Stallman would point out that you could in fact pay engineers to work on it, but still release it under the GPL. Like Google does with a lot of its software, for example. Such specialist software is likely to have significant support requirements, which can be charged for, or users of the software could simply pay people to add the new features they want.

  • Ones that even beat the proprietary competitors too, see http://tests.stockfishchess.or... [stockfishchess.org]. This is not to mention efforts like folding@home and similar. Of course there is still the problem of having large training data sets.
  • SETI@home is old as hell, so the idea of "open source" render farms is at least as old. Those "massive farms of GPUs running 24/7" don't scare me at all. In fact, both Siri and Google's voice recognition kinda suck. When they try to control us with this, or it is revealed that they send all their data directly to the government, I suppose we will have an incentive to act. Otherwise, wake me up when they do something interesting and new.
  • Mozilla (Score:4, Interesting)

    by stakman ( 662655 ) on Sunday September 25, 2016 @01:22PM (#52958317)
    The Mozilla project Vaani is intended to fill exactly this niche. https://wiki.mozilla.org/Vaani [mozilla.org]
    • by jernst ( 617005 )
      The Vaani wiki states: "No longer an approved project" (right side of https://wiki.mozilla.org/Vaani [mozilla.org]), unfortunately.
  • by hey! ( 33014 ) on Sunday September 25, 2016 @01:25PM (#52958331) Homepage Journal

    It's semantic recognition. Like what "it" in the prior sentence means -- in this case it's mainly a grammatical placeholder, but note how the various uses of "it" in *this* sentence are different.

    The really impressive thing about Siri is how well (although still not human-well) it divines intent, not just phonemes. Add to that a massive scale attempt to get the phonetic recognition part right, and it's a bit like trying to launch a competitor to Google Maps.

  • Thanks for asking the question. I didn't know about Mycroft [mycroft.ai] until I looked for an Intelligent Personal Assistant. [wikipedia.org]
  • by pthisis ( 27352 ) on Sunday September 25, 2016 @01:36PM (#52958373) Homepage Journal

    Not mentioned yet is http://lucida.ai/ [lucida.ai] -- it's the successor to Sirius, and where all the ongoing development is focused.

    Major options that are mentioned elsewhere in the thread:
    https://mycroft.ai/ [mycroft.ai] (One of the most advanced; can actually be used in a pretty useful manner now, but sends snippets to Google for voice recognition -- they intend to change that eventually, and they don't have a full-time open mic. Plus they aggregate audio across users, so it's less identifiable as coming from a single source.)
    https://wiki.mozilla.org/Vaani [mozilla.org] (from the Mozilla project; supposed to enter beta this month according to that page)

  • by Sarusa ( 104047 ) on Sunday September 25, 2016 @01:43PM (#52958399)

    Open Source Siri always responds with 'RTFM, noob'. Should be pretty easy.

    Yes, this joke has been brought to you by the year 2005.

  • by Anonymous Coward on Sunday September 25, 2016 @01:44PM (#52958409)

    The thing is, this really is not an open source software issue; it is more of an infrastructure issue. People can make the code that handles spoken queries and returns answers, and do it as a community. That's not really the tricky part. What the OP is looking for, though, is a massive project of which code is a small part. There is voice processing, servers to maintain, lots of fine-tuning and learning to do; if we want the assistant to speak, then we need voice actors; etc. Plus hours and hours of testing and trials, and putting it all in an interface people will like.

    This reminds me of the "Where is the open source Facebook?" question. There are plenty of open source social network frameworks, but the code is a small part of the job. There's a massive amount of servers, advertising and social engagement that would need to happen for someone to make a new Facebook alternative. The open source code is there, it's the other parts which are missing.

    The author also seems to think most commercial software up to this point has an open equivalent, but it doesn't. Geological, accounting, mapping and tax software tends to be commercial only. There are usually no open source alternatives, because it's not something you can throw together and just publish online. You need auditors and geologists, accountants and so on to make these things work. It's not a coding problem so much as a business/product problem.

  • by timholman ( 71886 ) on Sunday September 25, 2016 @02:14PM (#52958533)

    It's pointless to talk about creating an open-source version of Siri or Alexa unless you can explain how you're going to also create and maintain the server-side infrastructure needed to make it work. The Siri and Alexa interfaces may run on a client, but they're brain-dead without the server farms of Apple and Amazon behind them.

    A similar example from the not-too-distant past: Aaron Swartz's download of a significant chunk of the JSTOR database. Those JSTOR articles wanted to be free, right? And they were set free - copies of Swartz's JSTOR download were available in a multi-GB torrent on several sites. Swartz's entire rationale was that those articles should be freely available to everyone.

    So where is the free, open-source version of JSTOR today? It doesn't exist, because building and maintaining a server-side infrastructure that makes that database usable costs money ... which, of course, is why JSTOR required a subscription fee.

    Solve the server-side economics, and you have a shot at building an open-source Siri. Until then, you're better off putting your open-source efforts into client-side applications.

    • Re:Communism@Home (Score:3, Interesting)

      by Anonymous Coward

      timholman's post is incredibly insightful. To get around the problem he points out, I think we need to distribute these services to the community, as the OP suggests. The telcos make this difficult, with restrictive terms of service. A cloud powered by millions of home users is probably the technical solution to the economic problem, but to implement it we'll need to free the fibre.

    • Re: (Score:3, Insightful)

      by r0kk3rz ( 825106 )

      Solve the server-side economics, and you have a shot at building an open-source Siri. Until then, you're better off putting your open-source efforts into client-side applications.

      There is a new wave of decentralised open source applications occurring at the moment which changes the server-side economics considerably. Perhaps not so much for something compute-heavy like Siri, but certainly for bandwidth-heavy things like YouTube. Things like Ethereum [wikipedia.org], IPFS [wikipedia.org], ZeroNet [wikipedia.org].

  • by EmperorOfCanada ( 1332175 ) on Sunday September 25, 2016 @03:42PM (#52958877)
    It only needs two features. First, keep cutting people off mid-sentence. If you are trying to say, "Send message to John Smith," it can cut you off before the name John Smith.

    Then it can randomly just wait until the end and say, "I can't find that person in your contacts, would you like me to search the local area for businesses of that name?" This is regardless of what the actual command was.

    What I find interesting about Siri is that it so rarely gets what I am saying correct, but when I insult it, it has got that right 100% of the time. "Fuck you Siri, you useless pile of shit," or any one of the zillion creative insults that I have thrown at it, has resulted in some "If I had feelings, they would be hurt." So I know that it is not my microphone. It is the pile of crap just not getting what I am saying.

    I am saying, "Call John Smith." or "Message John Smith" or "Read last message" or "Play audiobook, the John Smith Story."

    I have a twenty minute ride home from work. I once spent the entire twenty minute ride home trying to send a message to someone that said, "I will be home in 20 minutes" (except that as I kept trying, that number grew ever smaller).

    Nearly the entire time it would just cut me off mid sentence. It would often be in the middle of my message. So it would end up saying "Would you like to send the message "I will"?" I was even trying to give it a run-on sentence such as IWillBeHomeIn20Minutes, so that it wouldn't pick up on a pause as the end. Then there is all the other bullshit that it sucks at. In the previous example it wouldn't confirm to whom I was sending the message. It would not allow me to change the message. So I started over and over just to see if I could get it to work. Yet as a confirmation that it was hearing me I would ask things like, "What is the second derivative of x^3+x^2+3x+9" and it would give me the correct answer.

    Then, after the map program nearly continuously put me blocks from where I really was, giving me terrible directions in critical situations, and after trying Android's awesome Siri equivalent, I switched to Android.

    On this note, I don't think that Apple realizes how bad these missteps are getting. The fact that it took me 20 minutes to send no messages, the fact that it took me 20 minutes to remove that U2 bullshit from my phone, the fact that I can't remove BS apps from my phone, the fact that iTunes nearly always jumps to music and movies (both on the phone and the desktop) when I am clearly not looking for either (such as when I am looking for a podcast). The fact that my Mac Pro (not a MacBook, but my $6,000 Mac Pro) is shoving iCloud down my throat. The fact that I can't repair half of this shit without using magic tools. The fact that little things like some extra memory cost about as much as a cheap version of the same device. All of it totals up to my typing this on a completely kick-ass Windows desktop that is presently charging my completely kick-ass huge-screened Android phone that I rooted and easily removed all the BS from.

    While I am seemingly a single customer, I am also in charge of purchasing for a large company. A company where I switched many of the execs and programmers to Apple. A switch that I am now reversing. Do I hate Apple? Nope. The key is that Apple is no longer working for me; the devices that I bought weren't my servants, but little Apple salesmen. Then there are things like Xcode, which was no longer really encouraging me to do things as a professional programmer, but trying to lock me into the Apple ecosystem. Oddly enough, this is why I originally left Windows and Microsoft. It was all about .NET and getting me to become a SharePoint/MS salesman. But now things like Visual Studio let me program for my Android and iOS just as slick as can be. They are tools that work for me.

    Can you imagine a carpenter who got a hammer that would only hammer mastercraft nails? Or a hammer that regularly missed the nail regardless of your skill with a hammer?
    • Can you imagine a carpenter who got a hammer that would only hammer mastercraft nails? Or a hammer that regularly missed the nail regardless of your skill with a hammer?

      What would you expect from a Mastercrap hammer? You know you bought it from Crappy Tire right?

      • I heard someone call it Cambodian Tire in a recent video. I should have used a higher quality example. I would be surprised if a mastercraft hammer could do any nails.
  • Think around, not through. What we want is efficient, intuitive and reliable human computer communication. If voice recognition is that hard, with many facepalm inducing errors, it is a stupid way to go. It is easier for humans to adapt to the machine. This means artificial dialects and simple AI and a bit of human training. Human consumers are lazy and want magic. Apple and MS try to grab them with the illusion of magic. It would be better for the free software to research what changes to speaking habits make the software component easier, then write howtos and youtube guides as to how to speak to it.

    • Think around, not through. What we want is efficient, intuitive and reliable human computer communication. If voice recognition is that hard, with many facepalm inducing errors, it is a stupid way to go. It is easier for humans to adapt to the machine. This means artificial dialects and simple AI and a bit of human training. Human consumers are lazy and want magic. Apple and MS try to grab them with the illusion of magic. It would be better for the free software to research what changes to speaking habits make the software component easier, then write howtos and youtube guides as to how to speak to it.

      Reminds me of the Palm Pilot and Graffiti. Rather than try to recognize normal handwriting like Microsoft was doing, Jeff Hawkins designed a simplified single-stroke character representation that was very easy to recognize in software.
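
      A voice-interface Graffiti would be a small fixed grammar the user learns instead of free-form language. A toy sketch with an invented two-word dialect:

```python
# Toy sketch of a Graffiti-style constrained voice dialect: instead of
# free-form language, the user learns a fixed "<verb> <object>" grammar
# that software can match with near-zero ambiguity. Vocabulary invented.

VERBS = {"call", "message", "play", "read"}
OBJECTS = {"john", "mom", "audiobook", "inbox"}

def parse(utterance):
    """Return (verb, object) if the utterance fits the dialect, else None."""
    words = utterance.lower().split()
    if len(words) == 2 and words[0] in VERBS and words[1] in OBJECTS:
        return (words[0], words[1])
    return None

print(parse("Call Mom"))             # ('call', 'mom'): fits the grammar
print(parse("please phone my mum"))  # None: rejected, not in the dialect
```

      The trade is the same one Graffiti made: the human does a little learning so the machine can be simple and reliable.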

  • The trouble is, unlike software development, which is free (if you don't value your time), implementing an open source Siri would require a data center filled with servers, and that costs money. The fundamental problem is that software development creates value, while an open source Siri is a cost center. Wikipedia would probably be a good candidate to pick up this task, because they are already familiar with the open source cost center model, they are a knowledge database, and they already have the server infrastruct

  • Whoever is designing such a system needs to remember to keep it client-side.

    Given the ridiculous amount of processing power available on even low-end phones and tablets now there's really no excuse to rely on the horrible latency and dependence that comes with server based voice recognition.

    Any voice processing that relies on server-side processing has already failed.

  • Well, we'll need a voice-to-text generator. Then we'll need some kind of AIML handler. Finally, a text-to-voice generator. IBM used to sell a voice-to-text interface card in the late 1990s. Text-to-voice is a small software routine these days.

    The machine learning part is the intriguing part. Books have been, and will continue to be, written on this. The hard part is: how can a computer program find a valid fact, and defend that the fact is valid?
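
    The middle AIML stage of that pipeline is essentially pattern templates with backreferences; a minimal sketch, with rules invented for illustration:

```python
# Minimal sketch of an AIML-style pattern handler: the middle stage of a
# speech-to-text -> pattern match -> text-to-speech pipeline. The first
# matching rule wins; \1 substitutes a captured group into the response.
# Patterns and responses are invented for illustration.
import re

RULES = [
    (re.compile(r"what is your name", re.I), "I am an open source assistant."),
    (re.compile(r"tell me a joke", re.I),
     "Why do programmers prefer dark mode? Light attracts bugs."),
    (re.compile(r"my name is (\w+)", re.I), r"Nice to meet you, \1."),
]

def respond(text):
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            return m.expand(template)  # fills in \1, \2, ... from the match
    return "I do not have a rule for that yet."

print(respond("my name is Ada"))  # Nice to meet you, Ada.
```

    Real AIML adds wildcards, topics and recursion, but the shape is the same; the hard, fact-finding part the parent asks about lives entirely outside this loop.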
  • Does anyone know of something along these lines that can run without an internet connection?

    Something that you could ask the status of a gpio pin, state values, or even ask it to tell you a joke (from a predefined list)?

      Short answer: no. The voice model is too large to run locally in any convenient way, and any data needed to formulate an answer (prices of products, driving instructions, jokes, baseball scores, countdown timers, etc.) has to be accessible to the voice computer.
      • Damn. I've been hoping for a somewhat primitive voice recognition/synth that could run on microcontrollers or similar. Limited responses and very primitive, sure, but it would be fun to integrate into some of my projects.

  • I am an Android user, and have used Google Now, but had not tried Siri until very recently, when it was bundled with the latest macOS Sierra release. So far, I have been less than impressed with both Google Now and Siri, and after trying Siri for three days on my computer, I turned off that functionality altogether, because it was not as helpful as I had expected a voice interface to be. So, I would like to know who's building a better, open source voice interface (as opposed to merely recreating Siri or Go
    • by Dog-Cow ( 21281 )

      I imagine Siri is much more useful on a phone than on a standard laptop or desktop system. I use it all the time on my phone, but I don't expect it to be at all useful on my mini.

      Being able to ask "how many tablespoons in half a cup?" and get a spoken answer is really useful, especially if I'm in the middle of cooking at the time.

  • Mycroft.ai
  • We're moving to a world of voice interactions processed by AI.

    We are? I honestly haven't noticed.

  • "And even with that data, Siri gets trained with a massive farm of GPUs running 24/7 -- but how can the open source community replicate that?"
