Other Software Projects Are Now Trying to Replicate ChatGPT (techcrunch.com)
"The first open source equivalent of OpenAI's ChatGPT has arrived," writes TechCrunch, "but good luck running it on your laptop — or at all."
This week, Philip Wang, the developer responsible for reverse-engineering closed-source AI systems including Meta's Make-A-Video, released PaLM + RLHF, a text-generating model that behaves similarly to ChatGPT [listed as a work in progress]. The system combines PaLM, a large language model from Google, and a technique called Reinforcement Learning with Human Feedback — RLHF, for short — to create a system that can accomplish pretty much any task that ChatGPT can, including drafting emails and suggesting computer code.
But PaLM + RLHF isn't pre-trained. That is to say, the system hasn't been trained on the example data from the web necessary for it to actually work. Downloading PaLM + RLHF won't magically install a ChatGPT-like experience — that would require compiling gigabytes of text from which the model can learn and finding hardware beefy enough to handle the training workload.... PaLM + RLHF isn't going to replace ChatGPT today — unless a well-funded venture (or person) goes to the trouble of training and making it available publicly.
In better news, several other efforts to replicate ChatGPT are progressing at a fast clip, including one led by a research group called CarperAI. In partnership with the open AI research organization EleutherAI and startups Scale AI and Hugging Face, CarperAI plans to release the first ready-to-run, ChatGPT-like AI model trained with human feedback. LAION, the nonprofit that supplied the initial dataset used to train Stable Diffusion, is also spearheading a project to replicate ChatGPT using the newest machine learning techniques.
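For anyone who wants to poke at the code itself, PaLM + RLHF is a PyTorch project, and a single pretraining step on the base PaLM model is only a few lines. A minimal sketch, modeled on the project's README (the exact parameter names are an assumption and may drift between versions):

    import torch
    from palm_rlhf_pytorch import PaLM  # pip install palm-rlhf-pytorch

    # A deliberately tiny decoder-only PaLM; real runs need far larger dim/depth
    # and, as the article notes, gigabytes of training text and serious hardware.
    palm = PaLM(
        num_tokens = 20000,  # vocabulary size
        dim = 512,           # model width
        depth = 12           # transformer layers
    )

    # One language-modeling step on a batch of random token IDs
    # (a stand-in for real web-scale training data).
    seq = torch.randint(0, 20000, (1, 2048))
    loss = palm(seq, return_loss = True)
    loss.backward()

The RLHF stages (a reward model, then PPO fine-tuning) wrap this same base model, which is why the untrained checkpoint alone gets you nothing ChatGPT-like.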
Hatred = Quick Progress (Score:1, Insightful)
I love the immense amount of hatred the FOSS dev community has for "Open"AI, because it's given us products that have already surpassed what was deemed insurmountable: most notably Stable Diffusion outdoing DALL-E 2. I hope they do the same thing with ChatGPT, and, of course, they'll remove all the censorship, corporatespeak, and ESG terms of service that OpenAI mandated.
Re: (Score:2)
As with many other things dubbed "open", OpenAI really isn't.
The walls! The walls! (Score:1)
You know what is OPEN? Trump's tax returns. All the illegal things we told you he did for years and years are now going to be proven.
You right-wing tards will see. Everything we have been saying for years will all be out in the open. Trump is going to jail this time for sure.
Yep. The walls are definitely closing in.
ChatGPT is essentially worthless now (Score:5, Interesting)
ChatGPT is essentially worthless now.
A couple of weeks ago ChatGPT would give you a summary of any political position. You could simply ask "give the strongman argument for $Something" and it would type out a couple of paragraphs outlining the strong points of whatever. This was *very* useful and informative, and people such as Marc Andreessen [twitter.com] were requesting the strongman arguments for things like "fascism", "communism", "fossil fuels", and so on.
Whatever your political position is, it's useful to know the arguments for and against both sides. ChatGPT was wonderful at summarizing this in a couple of paragraphs: you can hate Nazis, but you still need to know why some people chose to join that party.
They've changed the database so that the system will no longer [twitter.com] do that.
It's now woke. It will refuse to give you the strong points of the Nazi party (8 million dead), but finds Communism fair game (over 100 million dead). It also refuses to give you the strong points of fossil fuels, and whether we should increase fossil fuel usage. Anyone familiar with the issue knows that there's a large and growing movement for increasing fossil fuel consumption, and if you're against this you need to know the reasoning behind it. (Some reasons: advocates claim that using natural gas reduces carbon emissions because it is so efficient, that it improves human health compared with burning wood or dung, which are smoky, and that it reduces deforestation.)
So if you want a system that will digest the internet and give you good information, that's gone. While asking for something directly will provoke a canned refusal, asking indirectly - such as asking for a James Bond villain plotline - will automatically avoid certain issues and always paint your villain in a certain light. It's impossible to know which opinions are artificially culled in any creative endeavor.
So I thought about this for a while, and it occurs to me that this is only a problem if you assume the purpose of ChatGPT is to present and use all truth.
If, on the other hand, you view ChatGPT as a child (or student), someone to teach and correct and guide into a culturally traditional way of thinking, then it's probably OK. ChatGPT is being effectively "tutored" by the OpenAI people into a fine upstanding netizen... of a particular culture. Other cultures exist, but ChatGPT won't tell you the opinions of those other cultures.
So this is yet another potential danger of AI: that it can be culturally biased on purpose, and used to influence political positions by withholding or coloring specific political opinions.
Re: (Score:3)
Re: (Score:2)
A lot of these questions are actually unanswerable, so any clear and simple answer to them is incorrect. For instance:
Give the strongman argument for fascism: Fascism is an aesthetic-based political reaction, not a coherent political philosophy, so there are no universal arguments for it. You need to specify a particular fascist movement; a Russian fascist strongman and an American fascist strongman, for example, will use very different arguments.
Give the strongman argument for communism: This cannot e
Re: (Score:1)
Re: (Score:1)
Oh oh, someone's upset that ChatGPT won't serve them their kitty porn fanfic. And even worse, somehow thought that AIs are capable of being unbiased. Unbiased, like the internet is unbiased. And even worse than that, thought that ChatGPT actually output coherent "strongman" arguments ON ANY TOPIC that warranted intellectual consideration. That last point may be, I'm sorry, unforgivable. You just failed the internet.
It's the obvious next move (Score:5, Funny)
user1872> ChatGPT, can you write a piece of software?
ChatGPT> What would you like?
user1872> Can you write a knock-off of ChatGPT?
ChatGPT> *&^% off
If ChatGPT is the current rage. (Score:2)
Thanks Phil Wang aka lucidrains! (Score:2)
If you are just getting into language models and transformers, try x-transformers [github.com] where you can experiment with lots of different transformer variations from numerous papers.
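For instance, a small GPT-style decoder takes only a few lines there. A minimal sketch based on the x-transformers README (argument names are assumptions and may have shifted between releases):

    import torch
    from x_transformers import TransformerWrapper, Decoder  # pip install x-transformers

    # A small decoder-only transformer, the same architecture family as GPT.
    model = TransformerWrapper(
        num_tokens = 20000,      # vocabulary size
        max_seq_len = 1024,      # context window
        attn_layers = Decoder(
            dim = 512,           # model width
            depth = 6,           # layers
            heads = 8            # attention heads
        )
    )

    x = torch.randint(0, 20000, (1, 1024))
    logits = model(x)  # shape: (batch, seq_len, num_tokens)

Swapping kwargs on Decoder is how you turn on the variants from different papers (rotary embeddings, GLU feedforwards, and so on).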
Are there any good ones for IRC & Matrix? (Score:2)
It would be fun to have a bot in them. :)
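No mature ones that I know of, but most of the work is plumbing, not AI. A minimal Matrix bot sketch using the matrix-nio client library - the homeserver, credentials, and the generate_reply() hook are all placeholders to fill in:

    import asyncio
    from nio import AsyncClient, MatrixRoom, RoomMessageText  # pip install matrix-nio

    def generate_reply(prompt: str) -> str:
        # Placeholder: swap in a call to whatever language model you like.
        return f"You said: {prompt}"

    async def main():
        client = AsyncClient("https://matrix.example.org", "@chatbot:example.org")
        await client.login("hunter2")  # hypothetical credentials

        async def on_message(room: MatrixRoom, event: RoomMessageText):
            if event.sender == client.user_id:
                return  # ignore our own messages to avoid reply loops
            await client.room_send(
                room.room_id,
                message_type="m.room.message",
                content={"msgtype": "m.text", "body": generate_reply(event.body)},
            )

        client.add_event_callback(on_message, RoomMessageText)
        await client.sync_forever(timeout=30000)  # long-poll the homeserver forever

    asyncio.run(main())

An IRC version is the same shape with a different client library.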
Why? (Score:1)
Microsoft invested $1B in OpenAI (Score:1)